Decomposing Isotonic Regression for Efficiently Solving Large Problems
Ronny Luss
Dept. of Statistics and OR
Tel Aviv University
[email protected]
Saharon Rosset
Dept. of Statistics and OR
Tel Aviv University
[email protected]
Moni Shahar
Dept. of Electrical Eng.
Tel Aviv University
[email protected]
Abstract
A new algorithm for isotonic regression is presented based on recursively partitioning the solution space. We develop efficient methods for each partitioning subproblem through an equivalent representation as a network flow problem, and prove that this sequence of partitions converges to the global solution. These network flow problems can further be decomposed in order to solve very large problems. The success of isotonic regression in prediction and our algorithm's favorable computational properties are demonstrated through simulated examples as large as 2 × 10^5 variables and 10^7 constraints.
1 Introduction
Assume we have a set of n data observations (x_1, y_1), …, (x_n, y_n), where x ∈ X (usually X = R^p) is a vector of covariates or independent variables, y ∈ R is the response, and we wish to fit a model f̂ : X → R to describe the dependence of y on x, i.e., y ≈ f̂(x). Isotonic regression is a non-parametric modeling approach which only restricts the fitted model to being monotone in all independent variables [1]. Define G as the family of isotonic functions, that is, g ∈ G satisfies

x_1 ⪯ x_2 ⇒ g(x_1) ≤ g(x_2),

where the partial order ⪯ here will usually be the standard Euclidean one, i.e., x_1 ⪯ x_2 if x_1j ≤ x_2j ∀j. Given these definitions, isotonic regression solves

f̂ = arg min_{g∈G} ‖y − g(x)‖²   (1)
As many authors have noted, the optimal solution to this problem comprises a partitioning of the space X into regions obeying a monotonicity property, with a constant fitted value of f̂ in each region. It is clear that isotonic regression is a very attractive model for situations where monotonicity is a reasonable assumption, but other common assumptions like linearity or additivity are not. Indeed, this formulation has found useful applications in biology [2], medicine [3], statistics [1] and psychology [4], among others. The practicality of isotonic regression has already been demonstrated in various fields, and in this paper we focus on algorithms for computing isotonic regressions on large problems.
An equivalent formulation of L2 isotonic regression seeks an optimal isotonic fit ŷ_i at every point by solving

minimize  Σ_{i=1}^n (ŷ_i − y_i)²
subject to  ŷ_i ≤ ŷ_j  ∀(i,j) ∈ I   (2)

where I denotes a set of isotonic constraints. This paper assumes that I contains no redundant constraints, i.e., (i,j), (j,k) ∈ I ⇒ (i,k) ∉ I. Problem (2) is a quadratic program subject to simple linear constraints and, judging from the literature, appears to have been largely ignored due to its computational difficulty on large problems. Its worst-case O(n^4) complexity (a large overstatement in practice, as will be shown) has caused the results that follow to be overlooked [5, 6].
The discussion of isotonic regression originally focused on the case x ∈ R, where ⪯ denotes a complete order [4]. For this case, the well-known pool adjacent violators algorithm (PAVA) efficiently solves the isotonic regression problem. For the partially ordered case, many different algorithms have been developed over the years, with most early efforts concentrated on generalizations of PAVA [7, 5]. These algorithms typically have no polynomial complexity guarantees and are impractical when the data size exceeds a few thousand observations. Problem (1) can also be treated as a separable quadratic program subject to simple linear constraints. Such was done, for example, in [8], which applies active set methods to solve the problem. While such algorithms can often be efficient in practice, the algorithm of [8] gives no complexity guarantees. Algorithms in [9] related to those described here were applied to scheduling reorder intervals in production systems; they have complexity O(n^4), and connections to isotonic regression can be made through [1]. Interior point methods are another tool for solving Problem (1), and have time complexity guarantees of O(n^3) when the number of constraints is on the same order as the number of variables (see [10]). However, the excessive memory requirements of interior point methods, which stem from solving large systems of linear equations, typically make them impractical for large data sizes. Recently, [6] and [11] gave an O(n^2) approximate generalized PAVA algorithm; however, its solution quality can only be assessed via experimentation. An even better complexity of O(n log n) can be obtained for the optimal solution when the isotonic constraints take a special structure such as a tree, e.g. [12].
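For intuition, the completely ordered one-dimensional case admits a simple stack-based implementation of PAVA. The sketch below (in Python, with illustrative names of our choosing; this is not the partial-order algorithm developed in this paper) pools adjacent blocks whenever their means violate monotonicity:

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators for 1-D isotonic regression under L2 loss:
    returns the non-decreasing fit minimizing sum_i (yhat_i - y_i)^2."""
    blocks = []  # each block holds [sum of responses, count]
    for v in y:
        blocks.append([float(v), 1])
        # Pool while the newest block mean drops below its predecessor's mean
        # (cross-multiplied to avoid division).
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    return np.concatenate([np.full(c, s / c) for s, c in blocks])

print(pava([1.0, 3.0, 2.0, 4.0, 3.5]))  # [1.0, 2.5, 2.5, 3.75, 3.75]
```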
1.1 Contribution
Our novel approach to isotonic regression offers an exact solution of (1) with complexity bounded by O(n^4), but behaves as O(n^3) on practical problems. We demonstrate here that it accommodates problems with tens of thousands of observations, or even more with our decomposition. The main goal of this paper is to make isotonic regression a reasonable computational tool for large data sets, as the assumptions of this framework are very applicable in real-world applications. Our framework solves quadratic programs with 2 × 10^5 variables and more than 10^7 constraints, a problem of a size not solved anywhere in the previous isotonic regression literature; with the decomposition detailed below, even larger problems can be solved.
The paper is organized as follows. Section 2 describes a partitioning algorithm for isotonic regression and proves convergence to the globally optimal solution. Section 3 explains how the subproblems (creating a single partition) can be solved efficiently and decomposed in order to solve large-scale problems. Section 4 argues that the partitioning algorithm is significantly better in practice than its O(n^4) worst-case complexity suggests. Finally, Section 5 gives numerical results demonstrating favorable predictive performance on large simulated data sets, and Section 6 concludes with future directions.
Notation
The weight of a set of points A is defined as ȳ_A = (1/|A|) Σ_{i∈A} y_i. A subset U of A is an upper set of A if x ∈ U, y ∈ A, x ⪯ y ⇒ y ∈ U. A set B ⊆ A is defined as a block of A if ȳ_{U∩B} ≤ ȳ_B for each upper set U of A such that U ∩ B ≠ {}. A general block A is considered a block of the entire space. For two blocks A and B, we denote A ⪯ B if ∃x ∈ A, y ∈ B such that x ⪯ y and ∄x ∈ A, y ∈ B such that y ⪯ x (i.e., there is at least one comparable pair of points, and every comparable pair satisfies the direction of isotonicity). A and B are then said to be isotonic blocks (or to obey isotonicity). A group of nodes X majorizes (minorizes) another group Y if X ⪰ Y (X ⪯ Y). A group X is a majorant (minorant) of X ⊆ A, where A = ∪_{i=1}^k A_i, if X ⋠ A_i (A_i ⋠ X) ∀i = 1, …, k.
2 Partitioning Algorithm
We first describe the structure of the classic L2 isotonic regression problem and then detail the partitioning algorithm. The section concludes by proving convergence of the algorithm to the globally optimal isotonic regression solution.
2.1 Structure
Problem (2) is a quadratic program subject to simple linear constraints. The structure of the optimal solution to (2) is well known: observations are divided into k groups, and the fits in each group take the group's mean observation value. This can be seen through the following Karush-Kuhn-Tucker (KKT) conditions:

(a) ŷ_i = y_i − (1/2)(Σ_{j:(i,j)∈I} λ_ij − Σ_{j:(j,i)∈I} λ_ji)
(b) ŷ_i ≤ ŷ_j  ∀(i,j) ∈ I
(c) λ_ij ≥ 0  ∀(i,j) ∈ I
(d) λ_ij(ŷ_i − ŷ_j) = 0  ∀(i,j) ∈ I.
This set of conditions exposes the nature of the optimal solution: condition (d) implies that λ_ij > 0 ⇒ ŷ_i = ŷ_j. Hence λ_ij can be non-zero only within blocks of the isotonic solution that share the same fitted value. For observations in different blocks, λ_ij = 0. Furthermore, the fit within each block is trivially seen to be the average of the observations in the block, i.e., the fits minimize the block's squared loss. Thus, we get the familiar characterization of the isotonic regression problem as one of finding a division into isotonic blocks.
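For a concrete check of these conditions (a small example added here for illustration), take n = 2, y = (2, 1) and I = {(1, 2)}. The unconstrained fits violate (b), so the two points pool into one block: ŷ_1 = ŷ_2 = 1.5 with λ_12 = 1. Condition (a) gives ŷ_1 = 2 − (1/2)·λ_12 = 1.5 and ŷ_2 = 1 + (1/2)·λ_12 = 1.5, and (d) holds since ŷ_1 = ŷ_2.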
2.2 Partitioning
In order to take advantage of the optimal solution's structure, we propose solving the isotonic regression problem (2) as a sequence of subproblems, each of which divides a group of nodes into two groups. An important property of our partitioning approach is that nodes separated at one iteration are never rejoined into the same group at future iterations. This gives a clear bound on the total number of iterations in the worst case.
We now describe the partitioning criterion used for each subproblem. Suppose a current block V is optimal, so that ŷ_i* = ȳ_V ∀i ∈ V. Motivated by condition (a) of the KKT conditions, we define the net outflow of a group V as Σ_{i∈V}(y_i − ŷ_i). Finding two groups within V such that the net outflow from the higher group is greater than the net outflow from the lower group should be infeasible, according to the KKT conditions; the partition here looks for two such groups. Denote by C_V the set of all feasible (i.e., isotonic) cuts through the network defined by the nodes in V, where a cut is called isotonic if the two blocks created by the cut are isotonic. The optimal cut is determined as the cut that solves the problem

max_{c∈C_V}  Σ_{i∈V_c^+} (y_i − ȳ_V) − Σ_{i∈V_c^−} (y_i − ȳ_V)   (3)

where V_c^− (V_c^+) is the group on the lower (upper) side of the edges of cut c. In terms of isotonic regression, the optimal cut maximizes the difference between the sums of the normalized fits (y_i − ȳ_V) over the two groups. If this maximized difference is zero, then the group must be an optimal block. The optimal cut problem (3) can also be written as the binary program
maximize  Σ_{i∈V} x_i (y_i − ȳ_V)
subject to  x_i ≤ x_j  ∀(i,j) ∈ I   (4)
            x_i ∈ {−1, +1}  ∀i ∈ V.

Well-known results from [13] (due to the fact that the constraint matrix is totally unimodular) say that the following relaxation of this binary program is optimal with x* on the boundary, and hence the optimal cut can be determined by solving the linear program

maximize  z^T x
subject to  x_i ≤ x_j  ∀(i,j) ∈ I   (5)
            −1 ≤ x_i ≤ 1  ∀i ∈ V
where z_i = y_i − ȳ_V. This group-wise partitioning operation is the basis of our partitioning algorithm, given explicitly in Algorithm 1. It starts with all observations as one group (i.e., V = {1, …, n}) and recursively splits each group optimally by solving subproblem (5). At each iteration, a list C of potential optimal cuts for each group generated so far is maintained, and the cut among them with the highest objective value is performed; the list C is then updated with the optimal cuts of the two sub-groups just generated. Partitioning ends when the solution to (5) is trivial for every group (i.e., no split is found because each group is a block). As proven next, the algorithm terminates with the globally optimal solution to the isotonic regression problem (2).
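As a concrete illustration, LP (5) can be handed to any LP solver; the sketch below uses SciPy's linprog (function and variable names are ours, and the final snap to {−1, +1} assumes the solver returns a vertex solution, which total unimodularity guarantees exists):

```python
import numpy as np
from scipy.optimize import linprog

def optimal_cut(z, constraints):
    """Solve LP (5): maximize z'x s.t. x_i <= x_j for (i,j) in I, -1 <= x <= 1.
    z[i] = y[i] - group mean; returns x* with entries in {-1, +1}."""
    n = len(z)
    A_ub = np.zeros((len(constraints), n))     # rows encode x_i - x_j <= 0
    for r, (i, j) in enumerate(constraints):
        A_ub[r, i], A_ub[r, j] = 1.0, -1.0
    res = linprog(-np.asarray(z), A_ub=A_ub, b_ub=np.zeros(len(constraints)),
                  bounds=[(-1.0, 1.0)] * n, method="highs")
    return np.sign(np.round(res.x))            # snap a vertex solution to +-1

y = np.array([2.0, 1.0, 3.0])
I = [(0, 1), (1, 2)]                           # y_hat_0 <= y_hat_1 <= y_hat_2
print(optimal_cut(y - y.mean(), I))            # [-1. -1.  1.]
```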
Algorithm 1 Partitioning Algorithm
Require: Observations y_1, …, y_n and partial order I.
Require: V = {{1, …, n}}, C = {(0, {1, …, n}, {})}, W = {}.
1: while V ≠ {} do
2:   Let (val, w−, w+) ∈ C be the potential cut with largest val.
3:   Update V = (V \ {w− ∪ w+}) ∪ {w−, w+} and C = C \ {(val, w−, w+)}.
4:   for all v ∈ {w−, w+} do
5:     Set z_i = y_i − ȳ_v ∀i ∈ v, where ȳ_v is the mean of the observations in v.
6:     Solve LP (5) with input z and get x*.
7:     if x*_i is identical for all i ∈ v (no split is found; v is a block) then
8:       Update V = V \ {v} and W = W ∪ {v}.
9:     else
10:      Let v− = {i : x*_i = −1} and v+ = {i : x*_i = +1}.
11:      Update C = C ∪ {(z^T x*, v−, v+)}.
12:    end if
13:  end for
14: end while
15: return W, the optimal groups
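Continuing the sketch above, the whole of Algorithm 1 then amounts to a recursion over groups. This simplified version (our own; it processes groups in stack order rather than by largest objective value, which by Theorem 1 below still terminates in the same optimal blocks) reuses optimal_cut from the previous snippet:

```python
import numpy as np

def isotonic_fit(y, I):
    """Recursive partitioning (Algorithm 1, simplified).
    Fits each final block to its mean."""
    fit, groups = np.empty(len(y)), [list(range(len(y)))]
    while groups:
        g = groups.pop()
        z = np.asarray([y[i] for i in g]) - np.mean([y[i] for i in g])
        sub = [(g.index(i), g.index(j)) for (i, j) in I if i in g and j in g]
        # With no internal constraints, the optimal cut just splits by sign.
        x = optimal_cut(z, sub) if sub else np.where(z > 0, 1.0, -1.0)
        if np.all(x == x[0]):                  # no split: g is an optimal block
            fit[g] = np.mean([y[i] for i in g])
        else:
            groups.append([i for i, s in zip(g, x) if s < 0])
            groups.append([i for i, s in zip(g, x) if s > 0])
    return fit

y = [2.0, 1.0, 3.0, 2.5]
I = [(0, 1), (1, 2), (2, 3)]
print(isotonic_fit(y, I))                      # [1.5, 1.5, 2.75, 2.75]
```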
2.3 Convergence
Theorem 1 below states the main result that allows for a no-regret partitioning algorithm for isotonic regression, and leads to our convergence result. We assume that the group V is isotonic (i.e., has no holes) and is a union of optimal blocks.

Theorem 1 Assume a group V is a union of blocks from the optimal solution to problem (2). Then a cut made by solving (5) does not cut through any block in the globally optimal solution.

Proof. The following is a brief sketch of the proof idea. Let M be the union of the K optimal blocks in V that get broken by the cut. Define M_1 (M_K) to be a minorant (majorant) block in M. For each M_k define M_k^L (M_k^U) as the groups in M_k below (above) the algorithm's cut. Using the definitions of how the algorithm makes partitions, the following two consequences can be proven: (1) ȳ_{M_1} < ȳ_{M_K} by optimality (i.e., according to the KKT conditions) and isotonicity, and (2) ȳ_{M_1} > ȳ_V and ȳ_{M_K} < ȳ_V. The latter is proven by showing that ȳ_{M_1^U} > ȳ_V, because otherwise the M_1^U block would be on the lower side of the cut, resulting in M_1 being on the lower side of the cut, and thus ȳ_{M_1} > ȳ_V since ȳ_{M_1^L} > ȳ_{M_1^U} by the optimality assumption on block M_1 (with symmetric arguments for M_K). This leads to the contradiction ȳ_V < ȳ_{M_1} < ȳ_{M_K} < ȳ_V, and hence M must be empty.
Since Algorithm 1 starts with V = {1, …, n}, which is a union of (all) optimal blocks, we can conclude from this theorem that partitions never cut an optimal block. The following corollary is then a direct consequence of repeatedly applying Theorem 1 within Algorithm 1:

Corollary 2 Algorithm 1 converges to the globally optimal solution of (2) with no regret (i.e., without ever having to rejoin observations that were divided at a previous iteration).
3 Efficient solutions of the subproblems
Linear program (5) has a special structure that can be taken advantage of in order to solve larger problems faster. We first show why these problems can be solved faster than typical linear programs, and then give a novel decomposition that allows problems of extremely large size to be solved efficiently.

3.1 Network flow problems
The dual of Problem (2) is a network flow problem with quadratic objective. The network flow constraints are identical to those in (6) below, but the objective is (1/4) Σ_{i=1}^n (s_i² + t_i²), which, to the authors' knowledge, currently still precludes this dual from being efficiently solved with specialized network algorithms.
While this structure does not directly help solve the quadratic program, the network structure allows the linear program for the subproblems to be solved very efficiently. The dual program to (5) is

minimize  Σ_{i∈V} (s_i + t_i)
subject to  Σ_{j:(i,j)∈I} λ_ij − Σ_{j:(j,i)∈I} λ_ji − s_i + t_i = z_i  ∀i ∈ V   (6)
            λ, s, t ≥ 0

where again z_i = y_i − ȳ_V. Linear program (6) is a network flow problem with |V| + 2 nodes and |I| + 2|V| arcs. Variable s denotes links directed from a source node into each other node, while t denotes links connecting each node into a sink node. The network flow problem here minimizes the total flow over links from the source and into the sink, with the goal of leaving z_i units of flow at each node i ∈ V. Note that this is very similar to the network flow problem solved in [14], where z_i there represents the classification performance on node i. Specialized simplex methods for such network flow problems are typically much faster ([15] documents an average speedup factor of 10 to 100 over standard simplex solvers), largely because of simpler operations on network data structures rather than maintaining and operating on the simplex tableau (see [16] for an overview of network simplex methods).
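To see where (6) comes from (a standard LP-duality computation, spelled out here for completeness): attach multipliers λ_ij ≥ 0 to the constraints x_i − x_j ≤ 0, t_i ≥ 0 to x_i ≤ 1, and s_i ≥ 0 to −x_i ≤ 1 in (5). LP duality then requires A^T μ = z with μ = (λ, t, s) ≥ 0, which reads component-wise as

Σ_{j:(i,j)∈I} λ_ij − Σ_{j:(j,i)∈I} λ_ji + t_i − s_i = z_i  ∀i ∈ V,

exactly the constraint in (6), while the dual objective b^T μ collects the right-hand sides of the bound constraints, giving Σ_{i∈V}(s_i + t_i).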
3.2 Large-scale decompositions
In addition to having a very efficient method for solving this network flow problem, further enhancements can be made for extremely large problems of similar structure that might suffer from memory limits. It is already assumed that no redundant arcs exist in I (i.e., (i,j), (j,k) ∈ I ⇒ (i,k) ∉ I). One simple reduction involves eliminating negative (positive) nodes, i.e., nodes with z_i < 0 (z_i ≥ 0), where z_i = y_i − ȳ_V, that are bounded only from above (below). It is trivial to observe that these nodes will be equal to −1 (+1) in the optimal solution and that eliminating them does not affect the solution of (5) on the remaining nodes. In practice, however, this trivial reduction has minimal computational effect on large data sets. These reductions were also discussed in [14].
We next consider a novel reduction for the primal linear program (5). The main idea is that it can be solved through a sequence of smaller linear programs that reduce the total size of the full linear program at each iteration. Consider a minorant group of nodes J ⊆ V and the subset of arcs I_J ⊆ I connecting them. Solving problem (5) on this reduced network with the original input z divides the nodes in J into a lower and an upper group, denoted J_L and J_U. Nodes in J_L are not bounded from above and will be in the lower group of the full problem solved on V. In addition, the same problem solved on the remaining nodes V \ J_L gives the optimal solutions for those nodes. This is formalized in Proposition 3.

Proposition 3 Let J ⊆ V be a minorant group of nodes in V. Let w* and x* be optimal solutions to Problem (5) on the reduced set J and the full set V of nodes, respectively. If w_i* = −1, then x_i* = −1 ∀i ∈ J. The optimal solution for the remaining nodes (V \ J) can be found by solving (5) over only those nodes. The same claims can be made when J ⊆ V is a majorant group of nodes in V, where instead w_i* = +1 ⇒ x_i* = +1 ∀i ∈ J.
Proof. Denote by W the set of nodes such that w_i* = −1, and let W̄ = V \ W. Clearly, the solution to Problem (5) over the nodes in W has all variables equal to −1. Problem (5) can be written in the following form with separable objective:

maximize  Σ_{i∈W} z_i x_i + Σ_{i∈V\W} z_i x_i
subject to  x_i ≤ x_j  ∀(i,j) ∈ I, i, j ∈ W
            x_i ≤ x_j  ∀(i,j) ∈ I, i ∈ V, j ∈ V \ W
            −1 ≤ x_i ≤ 1  ∀i ∈ V   (7)
Start with an initial solution x_i = 1 ∀i ∈ V. The variables in W can be optimized over first, and by assumption their optimal values are all equal to −1. The optimization over the variables in W̄ is not bounded from below by the variables in W, since those variables are all at the lower bound. Hence the optimal solution for the variables in W̄ is given by optimizing over only those variables. The result for minorant groups follows; the final claim is argued in the same way for majorant groups.
Given Proposition 3, we can state Algorithm 2, which solves (5) iteratively. The subtrees are built as follows. First, an upper triangular adjacency matrix C is constructed to represent I, where C_ij = 1 if x_i ≤ x_j is an isotonic constraint and C_ij = 0 otherwise. A minorant (majorant) subtree with k nodes is then constructed from the upper-left (lower-right) k × k sub-matrix of C.
Algorithm 2 Iterative algorithm for linear program (5)
Require: Observations y_1, …, y_n and partial order I.
Require: MAXSIZE of problem to be solved by a general LP solver.
Require: V = {1, …, n}, L = U = {}.
1: while |V| ≥ MAXSIZE do
2:   ELIMINATE A MINORANT SET OF NODES:
3:   Build a minorant subtree T.
4:   Solve linear program (5) on T and get solution x* ∈ {−1, +1}^|T|.
5:   L = L ∪ {v ∈ T : x*_v = −1}, V = V \ {v ∈ T : x*_v = −1}.
6:   ELIMINATE A MAJORANT SET OF NODES:
7:   Build a majorant subtree T.
8:   Solve linear program (5) on T and get solution x* ∈ {−1, +1}^|T|.
9:   U = U ∪ {v ∈ T : x*_v = +1}, V = V \ {v ∈ T : x*_v = +1}.
10: end while
11: Solve linear program (5) on V and get solution x* ∈ {−1, +1}^|V|.
12: L = L ∪ {v ∈ V : x*_v = −1}, U = U ∪ {v ∈ V : x*_v = +1}.
The computational bottleneck of Algorithm 2 is solving linear program (5), which is done efficiently by solving the dual network flow problem (6). Thus, if the first network flow problem is too large to solve, it can be solved through a sequence of smaller network flow problems, as illustrated in Figure 1. Lemma 4 below proves that this reduction optimally solves the full problem (5). In the worst case, many network flow problems must be solved before the original full-size network flow problem is solved; in practice, on large problems, this is never observed. The computational performance of this reduction is demonstrated in Section 5.
Lemma 4 Algorithm 2 optimally solves Problem (5).
Proof. The result follows from repeated application of Proposition 3 over the set of nodes V that has
not yet been optimally solved for.
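A compact sketch of Algorithm 2 in the same vein as the earlier snippets (our own simplification, reusing optimal_cut; it assumes nodes are topologically ordered so that i < j for every (i, j) ∈ I, matching the adjacency-matrix construction above, and it can stall on adversarial inputs where a peel fixes no nodes, in line with the worst-case caveat):

```python
import numpy as np

def algorithm2(y, I, maxsize):
    """Iterative solve of LP (5) by peeling minorant/majorant subtrees.
    Returns (L, U): the nodes fixed at -1 and +1, respectively."""
    z = np.asarray(y, dtype=float) - np.mean(y)
    V, L, U = list(range(len(y))), set(), set()

    def peel(T, side):
        sub = [(T.index(i), T.index(j)) for (i, j) in I if i in T and j in T]
        x = optimal_cut(z[T], sub) if sub else np.where(z[T] > 0, 1.0, -1.0)
        return {v for v, s in zip(T, x) if s == side}

    while len(V) >= maxsize:
        L |= peel(V[:maxsize], -1.0)       # eliminate a minorant set of nodes
        V = [v for v in V if v not in L]
        U |= peel(V[-maxsize:], +1.0)      # eliminate a majorant set of nodes
        V = [v for v in V if v not in U]
    low = peel(V, -1.0)                    # final solve on the residual set
    return L | low, U | (set(V) - low)
```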
4 Complexity of the partitioning algorithm
Linear program (5) can be solved in O(n^3) using interior point methods. Given that the algorithm performs at most n iterations, the worst-case complexity of Algorithm 1 is O(n^4). However, the practical complexity of IRP is significantly better than the worst case, since each iteration solves smaller instances of LP (5). Consider the case of balanced partitioning at each iteration until there are n final blocks. In this case we can represent the partitioning path as a binary tree with log n levels, where at each level k, LP (5) is solved 2^k times on instances of size n/2^k, leading to a total complexity of

Σ_{k=0}^{log n} 2^k (n/2^k)^3 = n^3 Σ_{k=0}^{log n} (1/4)^k = n^3 · (1 − 0.25^{log n + 1}) / 0.75,

up to additional constants. For n ≥ 10 the summation is approximately 1.33, and hence in this case the partitioning algorithm has complexity O(1.33 n^3) (considering the complexity of interior point methods for partitioning).
Figure 1: Illustration of the LP (5) decomposition. The data here are 2-dimensional with only 1000 nodes in order to leave a clear picture. The first 7 iterations and the final iteration 16 of the decomposition are shown from left to right and top to bottom. The set of remaining nodes (blue circles) still to be identified as ±1 shrinks through the iterations. LP (5) solved on the entire set of nodes in the first picture may be too large for memory; hence subproblems are solved on the lower left (red dots) and upper right (green dots) of the network, and some nodes are fixed from the solutions of these subproblems. This is repeated until the number of unidentified nodes in the last iteration is small enough for memory. Note that at each iteration the three groups obey isotonicity.
More generally, let p and 1 − p be the proportions of each split. Table 1 displays the constants c representing the O(c·n^3) complexity for varying p and n. As demonstrated, the problem size decreases rapidly, and the complexity in practice is O(n^3).
        n=100     n=1000    n=10000
p=0.55  1.35n^3   1.35n^3   1.35n^3
p=0.65  1.46n^3   1.46n^3   1.47n^3
p=0.75  1.77n^3   1.78n^3   1.78n^3
p=0.85  2.56n^3   2.61n^3   2.61n^3
p=0.95  6.41n^3   6.94n^3   7.01n^3

Table 1: Complexity when groups are split with ratio p : (1 − p) at each iteration. The complexity in practice is O(n^3).
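The constants in Table 1 follow from summing subproblem costs down the partition tree; the small script below (our own check, not part of the paper) reproduces them, and for unbounded depth the sum has the closed form c = 1/(1 − p³ − (1−p)³):

```python
def complexity_constant(p, n):
    """Sum (subproblem size / n)^3 over all groups in a partition tree that
    splits every group with ratio p : (1 - p) until groups reach size ~1."""
    total, sizes = 0.0, [1.0]              # group sizes as fractions of n
    while sizes:
        total += sum(f ** 3 for f in sizes)
        sizes = [f * r for f in sizes for r in (p, 1.0 - p) if f * r * n >= 1.0]
    return total

for p in (0.55, 0.75, 0.95):
    print(p, round(complexity_constant(p, 10000), 2))   # ~1.35, 1.78, 7.01
```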
5 Numerical experiments
We here demonstrate that exact isotonic regression is computationally tractable for very large problems, and compare it against the time required to compute an approximation. We first show the computational performance of isotonic regression on simulated data sets as large as 2 × 10^5 training points with more than 10^7 constraints. We then show the favorable predictive performance of isotonic regression on large simulated data sets.
5.1 Large-Scale Computations
Figure 2 demonstrates that the partitioning algorithm, with decompositions of the partitioning step, can solve very large isotonic regressions. Three-dimensional data is simulated from U(0, 2), and the responses are created as linear functions plus noise. The size of the training sets varies from 10^4 to 2 × 10^5 points. The left panel shows that the partitioning algorithm finds the globally optimal isotonic regression solution in not much more time than it takes to find an approximation as in [6] for very large problems. Although the worst-case complexity of our exact algorithm is much worse, the two algorithms scale comparably in practice.
Figure 2 (right) shows how the number of partitions (left axis) increases as the number of training points increases. It is not clear why the approximation in [6] has fewer partitions as the size of the problem grows. More partitions (left axis) require solving more network flow problems; however, as discussed, they shrink very quickly along the partitioning path, resulting in the practical complexity seen in the left panel. The bold black line also shows the number of constraints (right axis), which reaches more than 10^7.
Figure 2: IRP performance on large-scale simulations. Data x ∈ R^3 has x_i ~ U(0, 2). Responses y are linear functions plus noise. The number of training points varies from 10^4 to 2 × 10^5. Results shown are averages of 5 simulations, with dotted lines at ± one standard deviation. Time (seconds) versus number of training points is on the left. On the right, the number of partitions is shown on the left axis, and the bold black line shows the average number of constraints per test on the right axis.
5.2 Predictive Performance
Here we show that isotonic regression is a useful tool when the data fit the monotonic framework. Data is simulated as above, and responses are constructed as y_i = Π_j x_ij + N(0, 0.5²), where the dimension p varies from 2 to 6. The training set size varies from 500 to 5000 to 50000 points, and the test size is fixed at 5000. Results are averaged over 10 trials, and 95% confidence intervals are given. A comparison is made between isotonic regression and linear least squares regression. With only 500 training points, the model is poorly fitted and a simple linear regression performs much better. 5000 training points is sufficient to fit the model well with up to 4 dimensions, after which linear regression outperforms isotonic regression, and 50000 training points fits the model well with up to 5 dimensions. Two trends are observed: larger training sets allow better models to be fit, which improves performance, while higher dimensions increase overfitting, which in turn decreases performance.
Dim | IRP MSE (n=500) | LS MSE (n=500) | IRP MSE (n=5000) | LS MSE (n=5000) | IRP MSE (n=50000) | LS MSE (n=50000)
2 | 0.69 ± 0.01 | 0.37 ± 0.00 | 0.27 ± 0.00 | 0.36 ± 0.00 | 0.25 ± 0.00 | 0.36 ± 0.00
3 | 0.76 ± 0.03 | 0.65 ± 0.01 | 0.31 ± 0.00 | 0.61 ± 0.01 | 0.26 ± 0.00 | 0.62 ± 0.00
4 | 1.45 ± 0.08 | 1.08 ± 0.01 | 0.61 ± 0.02 | 1.08 ± 0.02 | 0.34 ± 0.01 | 1.06 ± 0.03
5 | 4.61 ± 0.65 | 1.76 ± 0.02 | 2.61 ± 0.16 | 1.88 ± 0.04 | 0.93 ± 0.04 | 1.86 ± 0.05
6 | 12.89 ± 1.30 | 3.06 ± 0.04 | 8.41 ± 1.36 | 2.84 ± 0.07 | 3.37 ± 0.06 | 2.83 ± 0.12

Table 2: Statistics for simulations generated with y_i = Π_j x_ij + N(0, 0.5²). A comparison between the results of IRP and a least squares linear regression is shown. Bold demonstrates statistical significance at 95% confidence.
6 Conclusion
This paper demonstrates that isotonic regression can be used to solve extremely large problems. Fast approximations are useful; however, as shown, globally optimal solutions are also computationally tractable. Indeed, isotonic regression as done here performs with a complexity of O(n^3) in practice. As also shown, isotonic regression performs well at reasonable dimensions, but suffers from overfitting as the dimension of the data increases. Extensions of this algorithm will analyze the path of partitions in order to control overfitting by stopping the algorithm early. The statistical complexity of the models generated by partitioning will be examined. Furthermore, similar results will be developed for isotonic regression with different loss functions.
References
[1] R.E. Barlow and H.D. Brunk. The isotonic regression problem and its dual. Journal of the American Statistical Association, 67(337):140–147, 1972.
[2] G. Obozinski, G. Lanckriet, C. Grant, M.I. Jordan, and W.S. Noble. Consistent probabilistic outputs for protein function prediction. Genome Biology, 9:247–254, 2008. Open Access.
[3] M.J. Schell and B. Singh. The reduced monotonic regression method. Journal of the American Statistical Association, 92(437):128–135, 1997.
[4] J.B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1), 1964.
[5] H. Block, S. Qian, and A. Sampson. Structure algorithms for partially ordered isotonic regression. Journal of Computational and Graphical Statistics, 3(3):285–300, 1994.
[6] O. Burdakov, O. Sysoev, A. Grimvall, and M. Hussian. An O(n^2) algorithm for isotonic regression. In: G. Di Pillo and M. Roma (Eds.), Large-Scale Nonlinear Optimization, Nonconvex Optimization and Its Applications, 83:25–83, 2006.
[7] C.-I. C. Lee. The min-max algorithm and isotonic regression. The Annals of Statistics, 11(2):467–477, 1983.
[8] J. de Leeuw, K. Hornik, and P. Mair. Isotone optimization in R: Pool-adjacent-violators algorithm (PAVA) and active set methods. 2009. UCLA Department of Statistics. Retrieved from: http://cran.r-project.org/web/packages/isotone/vignettes/isotone.pdf.
[9] W.L. Maxwell and J.A. Muckstadt. Establishing consistent and realistic reorder intervals in production-distribution systems. Operations Research, 33(6):1316–1341, 1985.
[10] R.D.C. Monteiro and I. Adler. Interior path following primal-dual algorithms. Part II: Convex quadratic programming. Mathematical Programming, 44:43–66, 1989.
[11] O. Burdakov, O. Sysoev, and A. Grimvall. Generalized PAV algorithm with block refinement for partially ordered monotonic regression. In: A. Feelders and R. Potharst (Eds.), Proc. of the Workshop on Learning Monotone Models from Data at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, pages 23–37, 2009.
[12] P.M. Pardalos and G. Xue. Algorithms for a class of isotonic regression problems. Algorithmica, 23:211–222, 1999.
[13] K.G. Murty. Linear Programming. John Wiley & Sons, Inc., 1983.
[14] R. Chandrasekaran, Y.U. Ryu, V.S. Jacob, and S. Hong. Isotonic separation. INFORMS Journal on Computing, 17(4):462–474, 2005.
[15] MOSEK ApS. The MOSEK optimization tools manual. Version 6.0, revision 61. 2010. Software available at http://www.mosek.com.
[16] R.K. Ahuja, T.L. Magnanti, and J.B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice-Hall, Inc., 1993.
ADAPTIVE SPLINE NETWORKS
Jerome H. Friedman
Department of Statistics and
Stanford Linear Accelerator Center
Stanford University
Stanford, CA 94305
Abstract
A network based on splines is described. It automatically adapts the number of units, unit parameters, and the architecture of the network for each
application.
1 INTRODUCTION
In supervised learning one has a system under study that responds to a set of
simultaneous input signals {x_1, …, x_n}. The response is characterized by a set of output signals {y_1, y_2, …, y_m}. The goal is to learn the relationship between the
inputs and the outputs. This exercise generally has two purposes: prediction and
understanding. With prediction one is given a set of input values and wishes to
predict or forecast likely values of the corresponding outputs without having to
actually run the system. Sometimes prediction is the only purpose. Often, however,
one wishes to use the derived relationship to gain understanding of how the system
works. Such knowledge is often useful in its own right, for example in science, or it
may be used to help improve the characteristics of the system, as in industrial or
engineering applications.
The learning is accomplished by taking training data. One observes the outputs
produced by the system in response to varying sets of input values
{y_1i, …, y_mi | x_1i, …, x_ni}_1^N.   (1)
These data (1) are then used to train an "artificial" system (usually a computer program) to learn the input/output relationship. The underlying framework or model is usually taken to be

y_k = f_k(x_1, …, x_n) + ε_k,  k = 1, …, m   (2)
with ave(ε_k | x_1, …, x_n) = 0. Here y_k is the kth responding output signal, f_k is a single-valued deterministic function of an n-dimensional argument (the inputs), and ε_k is a random (stochastic) component reflecting the fact that (if nonzero) y_k is not completely specified by the observed inputs, but also responds to other quantities that are neither controlled nor observed. In this framework the learning goal is to use the training data to derive a function f̂(x_1, …, x_n) that can serve as a reasonable approximation (estimate) of the true underlying ("target") function f_k (2). The supervised learning problem can in this way be viewed as one of function or surface approximation, usually in high dimensions (n ≫ 2).
2 SPLINES
There is an extensive literature on the theory of function approximation (see Cheney [1986] and Chui [1988], and references therein). From this literature spline methods have emerged as being among the most successful (see de Boor [1978] for a nice introduction to spline methods). Loosely speaking, spline functions have the property that they are the smoothest for a given flexibility, and vice versa. This is important if one wishes to operate under the least restrictive assumptions concerning f_k(x_1, …, x_n) (2), namely, that it is relatively smooth compared to the noise ε_k but is otherwise arbitrary. A spline approximation is characterized by its order q [q = 1 (linear), q = 2 (quadratic), and q = 3 (cubic) are the most popular orders]. The procedure is to first partition the input variable space into a set of disjoint regions. The approximation f̂(x_1, …, x_n) is taken to be a separate n-dimensional polynomial in each region, with maximum degree q in any one variable, constrained so that f̂ and all of its derivatives up to order q − 1 are continuous across all region boundaries. Thus, a particular spline approximation is determined by a choice of q, which tends not to be very important, and the particular set of chosen regions, which tends to be crucial. The central problem associated with spline approximations is how to choose a good set of regions for the problem at hand.
2.1 TENSOR-PRODUCT SPLINES
The most popular method for partitioning the input variable space is by the tensor (outer) product of interval sets on each of the n axes. Each input axis is partitioned into K + 1 intervals delineated by K points ("knots"). The regions in the n-dimensional space are taken to be the (K + 1)^n intersections of all such intervals. Figure 1 illustrates this procedure for K = 4 knots on each of two axes, producing 25 regions in the corresponding two-dimensional space.

Owing to the regularity of tensor-product representations, the corresponding spline approximation can be represented in a simple form as a basis function expansion.
Let x = (x_1, …, x_n). Then

f̂(x) = Σ_t w_t B_t(x)   (3)

where {w_t} are the coefficients (weights) of the respective basis functions B_t(x), and the basis function set {B_t(x)} is obtained by taking the tensor product of the sets of functions

{1, x_j, x_j^2, …, x_j^q, (x_j − t_1j)_+^q, …, (x_j − t_Kj)_+^q}   (4)

over all of the axes, j = 1, …, n. That is, each of the K + q + 1 functions on each axis j (j = 1, …, n) is multiplied by all of the functions (4) corresponding to all of the other axes k (k = 1, …, n; k ≠ j). As a result, the total number of basis functions defining the tensor-product spline approximation (3) is

(K + q + 1)^n.   (5)
The functions comprising the second set in (4) are known as the truncated power functions:

(x_j − t_kj)_+^q = { 0,               x_j < t_kj
                     (x_j − t_kj)^q,  x_j ≥ t_kj   (6)

and there is one for each knot location t_kj (k = 1, …, K) on each input axis j (j = 1, …, n).
Although conceptually quite simple, tensor-product splines have severe limitations that preclude their use in high-dimensional settings (n ≫ 2). These limitations stem from the exponentially large number of basis functions required (5). For cubic splines (q = 3) with five inputs (n = 5) and only five knots per axis (K = 5), 59049 basis functions are required. For n = 6 that number is 531441, and for n = 10 it is approximately 3.5 × 10^9. This poses severe statistical problems in fitting the corresponding number of weights, unless the training sample is large compared to these numbers, and computational problems in any case, since the computation grows as the cube of the number of weights (basis functions). These are typical manifestations of the so-called "curse of dimensionality" (Bellman [1961]) that afflicts nearly all high-dimensional problems.
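To make the blow-up in (5) concrete, here is a small construction of the tensor-product basis (helper names are ours, not the paper's):

```python
import numpy as np
from itertools import product

def axis_basis(xj, knots, q=3):
    """The K + q + 1 one-dimensional functions (4) on one axis:
    1, x, ..., x^q, plus one truncated power (x - t)_+^q per knot t."""
    return [xj ** d for d in range(q + 1)] + \
           [np.maximum(xj - t, 0.0) ** q for t in knots]

def tensor_basis(X, knots_per_axis, q=3):
    """All (K + q + 1)^n tensor-product basis functions (5), evaluated at the
    rows of X. Feasible only for tiny n -- which is exactly the limitation."""
    per_axis = [axis_basis(X[:, j], kts, q) for j, kts in enumerate(knots_per_axis)]
    return np.column_stack([np.prod(np.column_stack(combo), axis=1)
                            for combo in product(*per_axis)])

X = np.random.rand(50, 2)
B = tensor_basis(X, knots_per_axis=[[0.25, 0.5, 0.75]] * 2)
print(B.shape)   # (50, (3 + 3 + 1) ** 2) = (50, 49)
```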
3 ADAPTIVE SPLINES
This section gives a very brief overview of an adaptive strategy that attempts to overcome the limitations of the straightforward application of tensor-product splines, making practical their use in high-dimensional settings. This method, called MARS (multivariate adaptive regression splines), is described in detail in Friedman [1991], along with many examples of its use involving both real and artificially generated data. (A FORTRAN program implementing the method is available from the author.)

The method (conceptually) begins by generating a tensor-product partition of the input variable space using a large number of knots, K < N, on each axis, where N is the training sample size (1). This induces a very large number, (K + 1)^n, of regions. The procedure then uses the training data to select particular unions of these (initially numerous) regions to define a relatively small number of (larger) regions most suitable for the problem at hand.
This strategy is implemented through the basis function representation of spline approximations (3). The idea is to select a relatively small subset of basis functions

{B*_k(x)}_1^{M_small} ⊂ {B_l(x)}_1^{M_large}   (7)

from the very large set (3)-(5) induced by the initial tensor-product partition. The particular subset for the problem at hand is obtained through standard statistical variable subset selection, treating the basis functions as the "variables". At the first step the best single basis function is chosen. The second step chooses the basis function that works best in conjunction with the first. At the mth step, the one that works best with the m − 1 already selected is chosen, and so on. The process stops when including additional basis functions fails to improve the approximation.
3.1 ADAPTIVE SPLINE NETWORKS
This section describes a network implementation that approximates the adaptive spline strategy described in the previous section. The goal is to synthesize a good set of spline basis functions (7) to approximate a particular system's input/output relationship, using the training data. For the moment, consider only one output y; this is generalized later. The basic observation leading to this implementation is that the approximation takes the form of sums of products of very simple functions, namely the truncated power functions (6), each involving a single input variable:

B*_m(x) = Π_{k=1}^{K_m} (x_j(k) − t_km)_+^q   (8)

and

f̂(x) = Σ_m w_m B*_m(x).   (9)

Here 1 ≤ j(k) ≤ n indexes an input variable, and 1 ≤ K_m ≤ n is the number of factors in the product (the interaction level).
The network is comprised of an ordered set of interconnected units. Figure 2 shows a diagram of the interconnections for a (small) network, and Figure 3 shows a schematic diagram of an individual unit. Each unit has as its inputs all of the system inputs x_1, …, x_n and all of the outputs from the previous units in the network, B_0, …, B_M. It is also characterized by three parameters: j, l, t. The triangles in Figure 3 represent selectors. The upper triangle selects one of the system inputs, x_j; the left triangle selects one of the previous unit outputs, B_l. These serve as inputs, along with the parameter t, to two internal units that each produce an output. The first output is B_l · (x_j − t)_+^q and the second is B_l · (t − x_j)_+^q. The whole unit thereby produces two outputs, B_{M+1} and B_{M+2}, that are available to serve as inputs to future units. In addition to units of this nature, there is an initial unit that produces the constant output B_0 = 1, which is also available to be selected as an input by all units. The output of the entire network, f̂, is a weighted sum (9) of all of the unit outputs (including B_0 = 1). This is represented by the bottom trapezoid in Figure 2.

The parameters associated with the network are the number of units N_u, the parameters associated with each unit,

{j_m, l_m, t_m}_1^{N_u},   (10)

and the weights in the final adder,

{w_k}_0^{2N_u}.   (11)
The goal of training the network is to choose values for these parameters (10) (11) that minimize the average (squared) future prediction error, that is, the squared error on (test) data not used as part of the training sample. An estimate of this quantity is provided by the generalized cross-validation model selection criterion (Craven and Wahba [1979]):

GCV = [ (1/N) Σ_{i=1}^N (y_i − f̂_i)² ] / [ 1 − (5N_u + 1)/N ]².   (12)

The numerator in (12) is the average squared error over the training data. The denominator is an (inverse) penalty for adding units: the quantity 5N_u + 1 is just the number of adjustable parameters in the network. This GCV criterion (12) has its roots in ordinary (leave-one-out) cross-validation and serves as an approximation to it (see Craven and Wahba [1979]).
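Criterion (12) is cheap to evaluate; a direct transcription (the helper name is ours):

```python
import numpy as np

def gcv(y, y_hat, n_units):
    """Generalized cross-validation criterion (12): training MSE divided by
    an (inverse) penalty on the 5*N_u + 1 adjustable parameters."""
    N = len(y)
    mse = np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)
    return mse / (1.0 - (5.0 * n_units + 1.0) / N) ** 2
```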
The training strategy used is semi-greedy. The units are considered in order. For the mth unit, the weights of all later units are set to zero, that is, w_k = 0 for k > 2m, where M_max is the maximum number of units in the network. The GCV criterion (12) is then minimized with respect to the parameters of the mth unit (l_m, j_m, t_m) and the weights associated with all previous units as well as the unit under consideration, {w_k}_0^{2m}, given the parameter values {l_i, j_i, t_i}_1^{m−1} associated with the previous units. This optimization can be done very rapidly, in O(nm²N) operations, using least-squares updating formulae (see Friedman [1991]). This process is repeated until M_max units have been added to the network. A post-optimization procedure (weight elimination) is then applied to select an optimal subset of weights to be set to zero, so as to minimize the GCV criterion (12). This will (usually) decrease the GCV value, since the criterion includes a penalty for increasing the number of nonzero weights.

The semi-greedy training strategy has the advantage of being quite fast: the total computation is O(nN·M_max³), where n is the number of system inputs, N is the training sample size, and M_max is the maximum number of units included in the network (before weight elimination). On a SUN SPARCstation, small to moderate problems train in seconds to minutes, and very large ones in a few hours. A potential disadvantage of this strategy is a possible loss of prediction accuracy compared to a more thorough optimization strategy. This tends not to be the case: experiments with more complete optimization seldom resulted in even moderate improvement, because units added later in the network can compensate for the suboptimal settings of parameters introduced earlier.
Figure 4 illustrates a (very small) network that might be realized with the MARS procedure. The number above each unit is the system input that it selected. The letter within each unit represents its knot parameter. The first unit necessarily has as its input the constant B_0 = 1. Its first output goes to the final adder but was not selected as an input to any future units. Its second output serves as the selected input to the next two units, but was eliminated from the adder by the final weight elimination, and so on. The final approximation for this network is

f̂(x) = w_0 + w_1(x_3 − s)_+ + w_2(s − x_3)_+(x_7 − t)_+ + w_3(s − x_3)_+(x_2 − u)_+
        + w_4(s − x_3)_+(u − x_2)_+(x_8 − v)_+ + w_5(s − x_3)_+(u − x_2)_+(v − x_8)_+.
Two possible network topologies that might be realized are of special interest. One is where all units happen to select the constant line B_0 = 1 as their unit input. In this case the resulting approximation is a sum of spline functions, each involving only one input variable. This is known as an additive function (no interactions):

f̂(x) = Σ_{j=1}^J f̂_j(x_j).   (13)

An additive function has the property that the functional dependence on any variable is independent of the values of all other input variables, up to an overall additive constant. Additive function approximations are important because many true underlying functions f(x) (2) are close to additive and thus well approximated by additive functions. MARS can realize additive functions as a subclass of its potential models.
Another potential network topology that can be realized by MARS is one in which every unit output serves either as an input to one (and only one) other unit or goes to the final weighted adder (but not both). This is a binary tree topology similar to those generated by recursive partitioning strategies like CART (Breiman, Friedman, Olshen and Stone [1984]). In fact, if one were to impose this restriction and employ q = 0 splines, the MARS strategy reduces to that of CART (see Friedman [1991]). Thus, MARS can also realize CART approximations as a subclass of its potential models.

MARS can be viewed as a generalization of CART. First, by allowing q > 0 splines, continuous approximations are produced; this generally results in a dramatic increase in accuracy. In addition, all unit outputs are eligible to contribute to the final adder, not just the terminal ones; and finally, all previous unit outputs are eligible to be selected as inputs for new units, not just the currently terminal ones.

Both additive and CART approximations have been highly successful in largely complementary situations: additive modeling when the true underlying function is close to additive, and CART when it predominantly involves high-order interactions between the input variables. MARS unifies both into a single framework. This lends hope that MARS will be successful at both these extremes, as well as over the broad spectrum of situations in between where neither works well.
Multiple response outputs y_1, …, y_m (1) (2) are incorporated in a straightforward manner. The internal units and their interconnections are the same as described above and shown in Figures 2 and 3. Only the final weighted adder unit (Figure 2) is modified, to incorporate a set of weights

{w_mk}_{m=0}^{M},  k = 1, …, m   (14)

for each response output. The approximation for each output is

f̂_k(x) = Σ_{m=0}^{M} w_mk B_m,  k = 1, …, m.

The numerator in the GCV criterion (12) is replaced by

(1/(mN)) Σ_{k=1}^m Σ_{i=1}^N (y_ik − f̂_ik)²

and the criterion is minimized with respect to the internal network parameters (10) and all of the weights (14).
4 DISCUSSION
This section (briefly) compares and contrasts the MARS approach with radial basis functions and sigmoidal "back-propagation" networks. An important consequence of the MARS strategy is input variable subset selection. Each unit individually selects the best system input so that it can best contribute to the approximation. It is often the case that some or many of the inputs are never selected; these will be inputs that tend to have little or no effect on the output(s). In this case, excluding them from the approximation greatly increases statistical accuracy, and it also aids the interpretation of the produced model. In addition to global variable subset selection, MARS is able to do input variable subset selection locally in different regions of the input variable space. This is a consequence of the restricted support (nonzero value) of the basis functions produced. Thus, if in any local region the target function (2) depends on only a few of the inputs, MARS is able to use this to advantage, even if the relevant inputs are different in different local regions. Also, MARS is able to produce approximations of low interaction order even if the number of selected inputs is large.

Radial basis functions cannot do local (or usually even global) input variable subset selection as part of the procedure. All basis functions involve all of the inputs at the same relative strength everywhere in the input variable space. If the target function (2) is of this nature, they will perform well, in that no competing procedure will do better, or likely even as well. If this is not the case, radial basis functions cannot take advantage of the situation to improve accuracy. Also, radially symmetric basis functions produce approximations of the highest possible interaction order (everywhere in the input space). This results in a marked disadvantage if the target function predominantly involves interactions among at most a few of the inputs (such as additive functions (13)).
Standard networks based on sigmoidal units of linear combinations of inputs share the properties described above for radial basis functions. Including "weight elimination" (Rumelhart [1988]) provides an (important) ability to do global (but not local) input variable subset selection. The principal differences between MARS and this approach center on the use of splines rather than sigmoids, and of products rather than linear combinations of the input variables. Splines tend to be more flexible, in that two spline functions can closely approximate any sigmoid, whereas it can take many sigmoids to approximate some splines. MARS' use of product expansions enables it to produce approximations that are local in nature. Local approximations have the property that if the target function is badly behaved in any local region of the input space, the quality of the approximation is not affected in the other regions. Also, as noted above, MARS can produce approximations of low interaction order; this is difficult for approximations based on linear combinations.

Both radial basis functions and sigmoidal networks produce approximations that are difficult to interpret. Even in situations where they produce high accuracy, they provide little information concerning the nature of the target function. MARS approximations, on the other hand, can often provide considerable interpretable information. Interpreting MARS models is discussed in detail in Friedman [1991]. Finally, training MARS networks tends to be computationally much faster than training these other types of learning procedures.
References
Bellman, R. E. (1961). Adaptive Control Processes. Princeton University Press,
Princeton, NJ.
Breiman, L., Friedman, J. H., Olshen, R. A. and Stone, C. J. (1984). Classification
and Regression Trees. Wadsworth, Belmont, CA.
Cheney, E. W. (1986). Multivariate Approximation Theory: Selected Topics.
Monograph: SIAM CBMS-NSF Regional Conference Series in Applied Mathematics, Vol. 51.
Chui, C. K. (1988). Multivariate Splines. Monograph: SIAM CBMS-NSF Regional
Conference Series in Applied Mathematics, Vol. 54.
Craven, P. and Wahba, G. (1979). Smoothing noisy data with spline functions.
Estimating the correct degree of smoothing by the method of generalized crossvalidation. Numerische Mathematik 31 317-403.
de Boor , C. (1978). A Practical Guide to Splines. Springer-Verlag, New York, NY.
Friedman, J. H. (1991). Multivariate adaptive regression splines (with discussion).
Annals of Statistics, March.
Rumelhart, D. E. (1988). Learning and generalization. IEEE International Conference on Neural Networks, San Diego, plenary address.
[Figures 1-4 appeared here as hand-drawn schematics (network/tree diagrams for the adaptive spline strategy); apart from the labels FIGURE 1, FIGURE 2, FIGURE 3 and FIGURE 4, the figure content did not survive OCR.]
3,402 | 4,080 | Brain covariance selection: better individual
functional connectivity models using population prior
Gaël Varoquaux*
Parietal, INRIA
NeuroSpin, CEA, France
[email protected]
Jean-Baptiste Poline
LNAO, I2BM, DSV
NeuroSpin, CEA, France
[email protected]
Alexandre Gramfort
Parietal, INRIA
NeuroSpin, CEA, France
[email protected]
Bertrand Thirion
Parietal, INRIA
NeuroSpin, CEA, France
[email protected]
Abstract
Spontaneous brain activity, as observed in functional neuroimaging, has been
shown to display reproducible structure that expresses brain architecture and carries markers of brain pathologies. An important view of modern neuroscience is
that such large-scale structure of coherent activity reflects modularity properties
of brain connectivity graphs. However, to date, there has been no demonstration that the limited and noisy data available in spontaneous activity observations
could be used to learn full-brain probabilistic models that generalize to new data.
Learning such models entails two main challenges: i) modeling full brain connectivity is a difficult estimation problem that faces the curse of dimensionality
and ii) variability between subjects, coupled with the variability of functional signals between experimental runs, makes the use of multiple datasets challenging.
We describe subject-level brain functional connectivity structure as a multivariate Gaussian process and introduce a new strategy to estimate it from group data,
by imposing a common structure on the graphical model in the population. We
show that individual models learned from functional Magnetic Resonance Imaging (fMRI) data using this population prior generalize better to unseen data than
models based on alternative regularization schemes. To our knowledge, this is
the first report of a cross-validated model of spontaneous brain activity. Finally,
we use the estimated graphical model to explore the large-scale characteristics of
functional architecture and show for the first time that known cognitive networks
appear as the integrated communities of functional connectivity graph.
1 Introduction
The study of brain functional connectivity, as revealed through distant correlations in the signals
measured by functional Magnetic Resonance Imaging (fMRI), represents an easily accessible, albeit
indirect marker of brain functional architecture; in recent years, it has given rise to fundamental insights on brain organization by representing it as a modular graph with large functionally-specialized networks [1, 2, 3].
Among other features, the concept of functionally-specialized cognitive network has emerged as one
of the leading views in current neuroscientific studies: regions that activate simultaneously, spontaneously or as an evoked response, form an integrated network that supports a specific cognitive function [1, 3].
(* Funding from the INRIA-INSERM collaboration and grant ANR-08-BLAN-0250-02 VIMAGINE.)
In parallel, graph-based statistical analyses have shown that the graphical models
that naturally represent the correlation structure of brain signals exhibit small-world properties: any
two regions of the brain can be connected through few intermediate steps, despite the fact that most
nodes maintain only a few direct connections [4, 2]. These experimental results are consistent with
the view that the local neuronal systems in the brain group together to form large-scale distributed
networks [5]. However, the link between large-scale networks corresponding to a known cognitive
function and segregation into functional connectivity subgraphs has never been established.
At the individual level, the different brain functional networks are attractive as their coherence, as
manifested in their correlation structure, appears impacted by brain pathologies, such as schizophrenia [6], neurodegenerative diseases (e.g., Alzheimer's disease) [7, 8], or in the study of brain lesions
[9]. From the clinical standpoint, there is a strong interest in spontaneous-activity data to study and
diagnose brain pathologies because they can be recorded even on severely impaired subjects [10].
FMRI is the tool of choice to study large-scale functional connectivity, as it relies on wide expertise gained through decades of brain mapping, and MRI scanners are widely available in brain
research institutes and hospitals. However neural activity is observed in fMRI indirectly, at a limited
spatiotemporal resolution ((3 mm)^3 × 3 s typically), and is confounded by measurement and physiological noise (cardiac and respiratory cycles, motion). For clinical applications as well as inference
of brain fundamental architecture, the quantitative characterization of spontaneous activity has to
rely on a probabilistic model of the signal. The question of the robustness of covariance estimation
procedures to observation noise as well as inter-individual variability is thus fundamental, and has
not been addressed so far.
The focus of this work is the estimation of a large-scale Gaussian model to give a probabilistic
description of brain functional signals. The difficulties are two-fold: on the one hand, there is a
shortage of data to learn a good covariance model from an individual subject, and on the other
hand, subject-to-subject variability poses a serious challenge to the use of multi-subject data: this
concerns the creation of population-level connectivity templates, the estimation of the normal variability around this template, and the assessment of non-normal variability. In this paper, we provide
evidence that optimal regularization schemes can be used in the covariance estimation problem,
making it possible to pool data from several subjects. We show that the resulting covariance model
yields easily interpretable structures, and in particular we provide the first experimental evidence that
the functionally integrated communities of brain connectivity graphs correspond to known cognitive
networks. To our knowledge, this is the first experiment that assesses quantitatively the goodness
of fit of a full-brain functional connectivity model to new data. For this purpose, we introduce an
unbiased cross-validation scheme that tests the generalization power of the inferred model.
Although the proposed framework shares with so-called effective connectivity models (SEM [11],
DCM [12]) the formulation in terms of graphical model, it is fundamentally different in that these
approaches are designed to test the coefficients of (small) graphical models in a hypothesis-driven
framework, while our approach addresses the construction of large-scale model of brain connectivity
that might be valid at the population level, and is completely data-driven. [13] have applied with
success a similar framework to modeling task-driven brain activity.
The layout of the paper is the following. We first formulate the problem of estimating a high-dimensional Gaussian graphical model from multi-subject data. Second, we detail how we extract
activity time-series for various brain regions from fMRI data. Then, we compare the generalization
performance of different estimators based on various regularization procedures. Finally, we study
the graph communities of the learnt connectivity model as well as the integration and segregation
processes between these communities. The present work opens the way to a systematic use of
Gaussian graphical Models for the analysis of functional connectivity data.
2 Theoretical background: estimating Gaussian graphical models
From a statistical estimation standpoint, the challenge to address is to estimate a covariance or a
correlation matrix giving a good description of the brain activation data. We choose to use the
framework of Gaussian models as these are the processes with the minimum information (i.e., the maximum entropy) given a covariance matrix.
Covariance selection procedures Let us consider a dataset X \in R^{n \times p} with p variables and n samples, modeled as a centered multivariate Gaussian process. Estimating its covariance matrix is a
difficult statistical problem for two reasons. First, to specify a valid multivariate Gaussian model,
this covariance has to be positive definite. Second, if n < 21 p(p + 1), as this is the case in our
problem, the number of unknown parameters is greater than the number of samples. As a result,
the eigenstructure of the sample covariance matrix carries a large estimation error. To overcome
these challenges, Dempster [14] proposed covariance selection: learning or setting conditional independence between variables improves the conditioning of the problem. In multivariate Gaussian
models, conditional independence between variables is given by the zeros in the precision (inverse
covariance) matrix K. Covariance selection can thus be achieved by imposing a sparse support for
the estimated precision matrix, i.e., a small number of non-zero coefficients. In terms of graphical
models, this procedure amounts to limiting the number of edges.
Selecting the non-zero coefficients to optimize the likelihood of the model given the data is a difficult
combinatorial optimization problem; it is NP-hard in the number of edges. In order to tackle this problem with more than tens of variables, it can be relaxed into a convex problem using a penalization based on the ℓ1 norm of the precision matrix, which is known to promote sparsity of the estimates [15]. The optimization problem is given by:
\hat{K}_{\ell_1} = \operatorname*{argmin}_{K \succ 0} \; \operatorname{tr}(K \hat{\Sigma}_{\mathrm{sample}}) - \log \det K + \lambda \|K\|_1,    (1)

where \hat{\Sigma}_{\mathrm{sample}} = \frac{1}{n} X^T X is the sample covariance matrix, and \|\cdot\|_1 is the element-wise \ell_1 norm of the off-diagonal coefficients of the matrix. Optimal solutions to this problem can be computed very efficiently in O(p^3) time [15, 16, 17]. Note that this formulation of the problem amounts to
the computation of a maximum a posteriori (MAP) with an i.i.d. Laplace prior on the off-diagonal
coefficients of the precision matrix.
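As a hedged illustration of problem (1), the following Python sketch uses scikit-learn's GraphicalLasso solver on synthetic data; the data, the value of the regularization parameter, and the use of this particular solver are assumptions made for demonstration, not part of the paper.

    # Minimal sketch of solving problem (1) with an off-the-shelf solver
    # (scikit-learn's GraphicalLasso); alpha plays the role of lambda.
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((240, 20))   # n samples of p signals (toy stand-in)
    X -= X.mean(axis=0)

    model = GraphicalLasso(alpha=0.05).fit(X)
    K = model.precision_                 # sparse estimate of the inverse covariance
    off_diag = ~np.eye(20, dtype=bool)
    print("fraction of non-zero off-diagonal entries:",
          (np.abs(K[off_diag]) > 1e-8).mean())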
Imposing a common sparsity structure In the application targeted by this contribution, the problem is to estimate the precision matrices in a group of subjects among which one can assume that all
the individual precision matrices share the same structure of conditional independence, i.e., the zeros
in the different precision matrices should be at the same positions. This amounts to a joint prior that
can also lead to the computation of a MAP. To achieve the estimation with the latter constraint, a natural solution consists in estimating all matrices jointly. Following the idea of joint feature selection
using the group-Lasso for regression problems [18], the solution we propose consists in penalizing
precisions using a mixed norm \ell_{21}. Let us denote K^{(s)} the precision for subject s in a population of S subjects. The penalty can be written as \sum_{i \neq j} \sqrt{\sum_{s=1}^{S} (K_{ij}^{(s)})^2} = \sum_{i \neq j} \|K_{ij}^{(\cdot)}\|_2. This leads to the minimization problem:
\hat{K}^{(s)}_{\ell_{21}}, \; s = 1..S \;=\; \operatorname*{argmin}_{K^{(s)} \succ 0} \left( \sum_{s=1}^{S} \left[ \operatorname{tr}(K^{(s)} \hat{\Sigma}^{(s)}_{\mathrm{sample}}) - \log \det K^{(s)} \right] + \lambda \sum_{i \neq j} \| K^{(\cdot)}_{ij} \|_2 \right)    (2)
One can notice that in the special case where S = 1, (2) is equivalent to (1). By using such a penalization, a group of coefficients \{\hat{K}^{(s)}_{ij}, s = 1, \dots, S\} is either jointly set to zero or jointly non-zero [18]; one thus enforces the precision matrices to have a common sparse support across all subjects.
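A minimal sketch of the quantity being minimized in (2) is given below; it is an assumed toy evaluation of the joint objective (it is not the authors' SPICE-based solver, only the objective that solver optimizes).

    # Sketch: evaluating the l21-penalized joint objective of Eq. (2)
    # for a list of per-subject precision and sample-covariance matrices.
    import numpy as np

    def joint_objective(precisions, covariances, lam):
        # Data-fit term: sum over subjects of tr(K_s Sigma_s) - log det K_s
        fit = sum(np.trace(K @ S) - np.linalg.slogdet(K)[1]
                  for K, S in zip(precisions, covariances))
        # l21 penalty: l2 norm across subjects of each off-diagonal coefficient
        stacked = np.stack(precisions)                 # shape (S, p, p)
        off_diag = ~np.eye(stacked.shape[1], dtype=bool)
        penalty = np.sqrt((stacked ** 2).sum(axis=0))[off_diag].sum()
        return fit + lam * penalty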
To our knowledge, two other recent contributions address the problem of jointly estimating multiple
graphical models [19, 20]. While the approach of [19] is different from (2) and does not correspond
to a group-Lasso formulation, [20] mentions the problem (2). Compared to this prior work, the optimization strategy we introduce differs largely, as do the application and the validation settings. Indeed, we are not interested in detecting the presence or the absence of edges on a common graph, but in improving the estimation of a probabilistic model of the individual data. Also, the procedure to set the regularization parameter \lambda is done by evaluating the likelihood of unseen data in a principled
nested cross-validation setting.
In order to minimize (2), we modified the SPICE algorithm [21], which consists in upper bounding the non-differentiable absolute values appearing in the ℓ1 norm with a quadratic differentiable function. When using a group-Lasso penalty, the non-differentiable ℓ2 norms appearing in the ℓ21 penalty can similarly be upper bounded. The computational complexity of an iteration that updates all coefficients once is now in O(S p^3): it scales linearly with the number of models to estimate. Following the derivation from [16], the iterative optimization procedure is stopped using a condition on the
optimality of the solution using a control on the duality gap. Global optimality of the estimated
solution is made possible by the convexity of the problem (2).
Alternatively, a penalization based on a squared ℓ2 norm has been investigated. It consists in regularizing the estimate of the precision matrix by adding a diagonal matrix to the sample covariance before computing its inverse. It amounts to an ℓ2 shrinkage that uniformly penalizes off-diagonal terms:
\hat{K}_{\ell_2} = (\hat{\Sigma}_{\mathrm{sample}} + \lambda I)^{-1}    (3)
Although the penalization parameter \lambda for this shrinkage can be chosen by cross-validation, Ledoit and Wolf [22] have introduced a closed formula that leads to a good choice in practice. Unlike ℓ1 penalization, ℓ2 shrinkage uniformly downplays connections between variables, and is thus of less interest for the study of brain structure. It is presented mainly for comparison purposes.
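For comparison purposes, the ℓ2-shrinkage estimator of Eq. (3) is one line of linear algebra; the sketch below (an assumed illustration on toy data) also shows the Ledoit-Wolf closed-form alternative via scikit-learn.

    # Sketch of the l2-shrinkage estimator of Eq. (3), plus the Ledoit-Wolf
    # closed-form alternative of [22] via scikit-learn, for comparison.
    import numpy as np
    from sklearn.covariance import LedoitWolf

    def precision_l2(X, lam):
        S = np.cov(X, rowvar=False, bias=True)   # sample covariance
        return np.linalg.inv(S + lam * np.eye(S.shape[0]))

    X = np.random.default_rng(0).standard_normal((240, 20))
    K_l2 = precision_l2(X, lam=0.1)
    K_lw = np.linalg.inv(LedoitWolf().fit(X).covariance_)  # shrinkage set by [22]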
3 Probing brain functional covariance with fMRI
Inter-individual variability of resting-state fMRI We are interested in modeling spontaneous
brain activity, also called resting state data, recorded with fMRI. Although such data require complex
strategies to provide quantitative information on brain function, they are known to reveal intrinsic
features of brain functional anatomy, such as cognitive networks [1, 23, 3] or connectivity topology
[4, 2].
A well-known challenge with brain imaging data is that no two brains are alike. Anatomical correspondence between subjects is usually achieved by estimating and applying a deformation field that
maps the different anatomies to a common template. In addition to anatomical variability, within
a population of subjects, cognitive networks may recruit slightly different regions. Our estimation strategy is based on the hypothesis that although the strength of correlation between connected
brain regions may vary across subjects, many of the conditional independence relationships will be preserved, as they reflect the structural wiring.
The data at hand: multi-subject brain activation time series 20 healthy subjects were scanned
twice in a resting task, eyes closed, resulting in a set of 244 brain volumes per session acquired with
a repetition time of 2.4 s. As in [8], after standard neuroimaging pre-processing, we extract brain
fMRI time series and average them based on an atlas that subdivides the gray matter tissues into
standard regions.
We have found that the choice of the atlas used to extract time-series is crucial. Depending on
whether the atlas oversegments brain lobes into regions smaller than subject-to-subject anatomical
variability or captures this variability, cross-validation scores vary significantly. Unlike previous
studies [4, 8], we choose to rely on an inter-subject probabilistic atlas of anatomical structures. For
cortical structures, we use the prior probability of cortical folds in template space^1 used in Bayesian
sulci labeling and normalization of the cortical surface [24]. This atlas covers 122 landmarks spread
throughout the whole cortex and matches naturally their anatomical variability in terms of position,
shape, and spread. It has been shown to be a good support to define regions of interest for fMRI
studies [25]. For sub-cortical structures, such as gray nuclei, we use the Harvard-Oxford sub-cortical
probabilistic atlas, as shipped by the FSL software package. The union of both atlases forms an
inter-subject probabilistic atlas for 137 anatomically-defined regions.
As we are interested in modeling only gray-matter correlations, we regress out confound effects obtained by extracting signals in different white matter and cortico-spinal fluid (CSF) regions, as well
as the rigid-body motion time courses estimated during data pre-processing. We use the SPM software to derive voxel-level tissue probability of gray matter, white matter, and CSF from the anatomical images of each subject. Tissue-specific time series for either confound signals or grey-matter
signals are obtained by multiplying the subject-specific tissue probability maps with the probabilistic
atlas.
Finally, as the fMRI signals contributing to functional connectivity have been found to lie in frequencies below 0.1 Hz [26], we apply temporal low-pass filtering to the extracted time series. We set the
Footnote 1: The corresponding atlas can be downloaded at http://lnao.lixium.fr/spip.php?article=229
cut-off frequency of the filter using cross-validation with the Ledoit-Wolf ℓ2-shrinkage estimator.
We find an optimal choice of 0.3 Hz. Also, we remove residual linear trends due to instrument bias
or residual movement signal and normalize the variance of the resulting time series. The covariance
matrices that we study thus correspond to correlations.
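A hedged sketch of the signal clean-up just described (low-pass filtering, detrending, variance normalization) is given below; the filter order and the SciPy-based implementation are assumptions, and the 0.3 Hz cut-off is clipped at the Nyquist frequency implied by the 2.4 s repetition time.

    # Assumed illustration of the preprocessing steps described above.
    import numpy as np
    from scipy import signal

    def clean_timeseries(ts, tr=2.4, cutoff=0.3):
        """ts: array of shape (n_timepoints, n_regions) of regional averages."""
        nyquist = 0.5 / tr
        wn = min(cutoff / nyquist, 0.99)      # 0.3 Hz exceeds Nyquist at TR = 2.4 s
        b, a = signal.butter(5, wn, btype="low")
        ts = signal.filtfilt(b, a, ts, axis=0)   # temporal low-pass filtering
        ts = signal.detrend(ts, axis=0)          # remove residual linear trends
        return ts / ts.std(axis=0)               # normalize variance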
4 Learning a better model for a subject's spontaneous activity
Model-selection settings Given a subject's resting-state fMRI dataset, our goal is to estimate the best multivariate normal model describing this subject's functional connectivity. For this, we learn the model using the data from one session, and measure the likelihood of the second session's data
from the same subject. We use this two-fold cross-validation procedure to tune the regularization
parameters. In addition, we can use the data of the remaining subjects as a reference population
during the training procedure to inform the model for the singled-out subject.
Generalization performance for different estimation strategies We compare different estimation strategies. First, we learn the model using only the subject's data. We compare the sample correlation matrix, as well as the Ledoit-Wolf, ℓ2- and ℓ1-penalized estimators. Second, we use the combined data of the subject's training session as well as the population, using the same estimators: we concatenate the data of the population and of the training session to estimate the covariance. Finally, we use the ℓ21-penalized estimator of Eq. (2) to learn different precisions for each subject, with a common sparse structure. As this estimation strategy yields a different correlation matrix for each subject, we use the precision corresponding to the singled-out subject to test (i.e., compute the Gaussian log-likelihood of) the data of the left-out session.
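The cross-validation score used here is the Gaussian log-likelihood of unseen data under the learned precision matrix; a minimal sketch follows (assuming centered, i.e., zero-mean, signals and a positive-definite K).

    # Average Gaussian log-likelihood of test-session data under a precision
    # matrix K learned on the training session (zero-mean model assumed).
    import numpy as np

    def gaussian_loglik(X_test, K):
        p = K.shape[0]
        _, logdet = np.linalg.slogdet(K)
        quad = np.einsum("ij,jk,ik->i", X_test, K, X_test)  # x^T K x per sample
        return 0.5 * (logdet - quad - p * np.log(2 * np.pi)).mean()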
The cross-validation results (averaged across 20 subjects) are reported in Table 1. In addition, an
example of estimated precision matrices can be seen in Figure 1. We find that, due to the insufficient
number of samples in one session, the subject's sample precision matrix performs poorly. ℓ2 penalization gives a good conditioning and better performance, but is outperformed by the ℓ1-penalized estimator, which yields a sparsity structure expressing conditional independences between regions. On the other hand, the population's sample precision is well-conditioned due to the high number of samples at the group level and generalizes much better than the subject-level sample precision or the corresponding ℓ2-penalized estimate. Penalizing the population-level covariance matrix does not give a significant performance gain. In particular, the ℓ1-penalized subject-level precision matrix outperforms the precision matrices learned from the group (p < 10^-5).
We conclude from these cross-validation results that the generalization power of the models estimated from the population data is not limited by the number of samples but by the fact that they do not reflect the subject's singularities. On the other hand, the estimation of a model solely from the subject's data is limited by estimation error. We find that the ℓ21-penalized estimator strikes a compromise and generalizes significantly better than the other approaches (p < 10^-10). Although each
individual dataset is different and generalization scores vary from subject to subject, compared to
the second-best performing estimator the `21 -penalized estimator gives a net gain for each subject
of at least 1.7 in the likelihood of unseen data.
Graphs estimated As can be seen from Figure 1, precision matrices corresponding to models that
do not generalize well display a lot of background noise whereas in models that generalize well,
a sparse structure stands out. Although an ℓ1 penalization is sparsity-inducing, the optimal graphs estimated with such estimators are not very sparse (see Table 1): a filling factor of 50% amounts
to 5 000 edges. As a result, the corresponding graphs are not interpretable without thresholding
                            |      Using subject data      |     Uniform group model      |
                            |  MLE    LW     ℓ2     ℓ1     |  MLE    LW     ℓ2     ℓ1     |  ℓ21
Generalization likelihood   |  33.1   -57.1  38.8   43.0   |  40.6   41.5   41.6   41.8   |  45.6
Filling factor              |  100%   100%   100%   45%    |  100%   100%   100%   60%    |  8%
Number of communities       |  6      5      5      9      |  9      8      7      9      |  16
Modularity                  |  .07    .07    .12    .25    |  .23    .23    .18    .32    |  .60
Table 1: Summary statistics for different estimation strategies. MLE is the Maximum Likelihood
Estimate, in other words, the sample precision matrix. LW is the Ledoit-Wolf estimate.
(corresponding visualizations are given in the supplementary materials). To interpret dense brain
connectivity graphs, previous work relied on extracting a connectivity backbone using a maximal
spanning tree [27], or graph statistics on thresholded adjacency matrices [2].
By contrast, the ℓ21-penalized graph is very sparse, with only 700 edges. Adequate penalization serves as a replacement for backbone extraction; moreover, it corresponds to a theoretically well-grounded and accurate model of brain connectivity. After embedding in 3D anatomical space, the
estimated graph is very symmetric (see Figure 2). A third of the weight on the edges is on connections between a region and the corresponding one on the opposite hemisphere. In addition, the
connectivity model displays strong fronto-parietal connections, while the visual system is globally
singled out into one cluster, connected to the rest of the cortex mostly via the middle-temporal area.
5 An application: graph communities to describe functional networks
Even very sparse, high-dimensional functional connectivity graphs are hard to interpret. However,
they are deemed of high neuroscientific interest, as their structure can reflect fundamental nervous
system assembly principles. Indeed, there is evidence from the study of the fault-resilient structure
of anatomical connections in the nervous systems that ensembles of neurones cluster together to
form communities that are specialized to a cognitive task [5, 4, 27]. This process, known as functional integration, goes along with a reduction of between-community connections, called segregation. So far, studies of full-brain connectivity graphs have focused on the analysis of their statistical properties, namely their small-world characteristics related to the emergence of strongly-connected communities in neural systems. These properties can be summarized by a measure called modularity [4, 2, 28]. As the original measures introduced for integration and segregation are Gaussian
entropy and mutual information measures [29, 30], the estimation of a well-conditioned Gaussian
graphical model of the functional signal gives us an adequate tool to study large-scale modularity
and integration in the brain. A limitation of the studies of statistical properties on graphs estimated
from the data is that they may reflect properties of the estimation noise. Given that our graphical
description generalizes well to unseen data, it should reflect the intrinsic properties of brain functional connectivity better than the sample correlation matrices previously used [4]. In this section,
we study these properties on the optimal precision matrices describing a representative individual as
estimated above.
Finding communities to maximize modularity Graph communities are a concept originally
introduced in social networks: communities are groups of densely-connected nodes with few between-group connections. Newman and Girvan [28] have introduced an objective function Q, called modularity, to measure the quality of a graph partition into a community structure. Choosing the partition to optimize modularity is an NP-hard problem, but Smyth and White formulate it as a
graph partitioning problem, and give an algorithm [31] based on a convex approximation leading to
spectral embedding and k-means clustering. The number of classes is chosen to optimize modularity.
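The following Python sketch is in the spirit of Smyth and White's algorithm (spectral embedding followed by k-means, with the number of classes chosen to maximize Newman-Girvan modularity); the networkx/scikit-learn helpers and the normalized-Laplacian embedding are assumptions, not the authors' exact implementation.

    # Hedged sketch: spectral embedding + k-means, k chosen by modularity.
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import modularity
    from sklearn.cluster import KMeans

    def best_partition(W, k_max=20):
        G = nx.from_numpy_array(np.abs(W))              # weighted graph
        L = nx.normalized_laplacian_matrix(G).toarray()
        _, vecs = np.linalg.eigh(L)                     # spectral embedding
        best, best_q = None, -np.inf
        for k in range(2, k_max + 1):
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(vecs[:, 1:k])
            parts = [set(np.flatnonzero(labels == c)) for c in range(k)]
            q = modularity(G, parts)
            if q > best_q:
                best, best_q = labels, q
        return best, best_q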
Brain functional-connectivity communities We apply Smyth and White's algorithm on the brain connectivity graphs. We find that using the ℓ21-penalized precision matrices yields a higher number of communities, and higher modularity values (Table 1), than the other estimation strategies. We discuss in detail the results obtained without regularization, and with the best-performing regularization strategies: ℓ1 penalization on individual data, and ℓ21 penalization. The communities extracted from the sample precision matrix are mostly spread throughout the brain, while the graph estimated with ℓ1 penalization on individual data yields communities centered on anatomo-functional regions such as the visual system (figures in supplementary materials). The communities extracted on the ℓ21-penalized precision exhibit finer anatomo-functional structures, but also extract some known
functional networks that are commonly found while studying spontaneous as well as task-related
activity [3]. In Figure 2, we display the resulting communities, making use, when possible, of the
same denominations as the functional networks described in [3]. In particular, the default mode network and the fronto-parietal network are structures reproducibly found in functional-connectivity
studies that are non-trivial as they are large-scale, long-distance, and not comprised solely of bilateral regions.
[Figure 1 panels, left to right: Subject sample precision; Subject precision ℓ1; Group sample precision; Group precision ℓ1; Group precision ℓ21 (colorbar tick values omitted).]
Figure 1: Precision matrices computed with different estimators. The precision matrix is shown in
false colors in the background and its support is shown in black and white in an inset.
[Figure 2 panels: Full graph; Communities. Community legend: Medial visual; Occipital pole visual; Lateral visual; Left and right fronto-parietal; Default mode; Dorsal motor; Auditory; Ventral motor; Pars opercularis (Broca area); Fronto-lateral; Posterior inferior temporal 1; Posterior inferior temporal 2; Basal ganglia; Right thalamus; Left putamen; Cingulo-insular network.]
Figure 2: Functional-connectivity graph computed by `21 -penalized estimation and corresponding
communities. The graph displayed on the left is not thresholded, but on the top view, connections
linking one region to its corresponding one on the opposite hemisphere are not displayed.
Figure 3: Between-communities integration graph obtained through ℓ1- (left) and ℓ21-penalization (right). The size of the nodes represents integration within a community and the size of the edges represents mutual information between communities. Region order is chosen via 1D Laplace embedding. The regions comprising the communities for the ℓ1-penalized graph are detailed in the supplementary materials.
supplementary materials.
Integration and segregation in the graph communities These functionally-specialized networks
are thought to be the expression of integration and segregation processes in the brain's circuit architecture. We apply the measures introduced by Tononi et al. [29] on the estimated graphs to quantify
this integration and segregation, namely Gaussian entropy of the functional networks, and mutual
information. However, following [32], we use conditional integration and conditional mutual information to obtain conditional pair-wise measures, and thus a sparser graph: for two sets of nodes S1
and S2 ,
Integration:        I_{S_1} = \frac{1}{2} \log \det(K_{S_1})    (4)
Mutual information: M_{S_1, S_2} = I_{S_1 \cup S_2} - I_{S_1} - I_{S_2},    (5)

where K_{S_1} denotes the precision matrix restricted to the nodes in S_1. We use these two measures,
pair-wise and within-community, to create a graph between communities.
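Eqs. (4)-(5) translate directly into code; the sketch below (with assumed helper names) computes integration and mutual information from a precision matrix and node index sets.

    # Conditional integration and mutual information of Eqs. (4)-(5).
    import numpy as np

    def integration(K, S):
        S = np.asarray(sorted(S))
        return 0.5 * np.linalg.slogdet(K[np.ix_(S, S)])[1]

    def mutual_information(K, S1, S2):
        return (integration(K, set(S1) | set(S2))
                - integration(K, S1) - integration(K, S2))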
This graph reflects the large-scale organization of brain function. We compare the graphs built using the ℓ1- and ℓ21-penalized precisions (Figure 3). We find that the former is much sparser than the latter, reflecting a higher segregation between the estimated communities. The graph corresponding to the ℓ21 penalization segments the brain into smaller communities, and care must be taken in comparing the relative integration of the different systems: for instance, the visual system appears more integrated on the ℓ1 graph, but this is because it is split in three on the ℓ21 graph.
Although this graph is a very simplified view of brain functional architecture at rest, it displays
some of the key processing streams: starting from the primary visual system (medial visual areas),
we can distinguish the dorsal visual pathway, going through the occipital pole to the intra-parietal
areas comprised in the default mode network and the fronto-parietal networks, as well as the ventral
visual pathway, going through the lateral visual areas to the inferior temporal lobe. The default
mode and the fronto-parietal networks appear as hubs, connecting different networks with different
functions, such as the visual streams, but also the motor areas, as well as the frontal regions.
6 Conclusion
We have presented a strategy to overcome the challenge of subject-to-subject variability and learn
a detailed model of an individual?s full-brain functional connectivity using population data. The
learnt graphical model is sparse and reveals the interaction structure between functional modules
via conditional independence relationships that generalize to new data. As far as we can tell, this is
the first time an unsupervised model of brain functional connectivity is backed by cross-validation.
Also, from a machine learning perspective, this work is the first demonstration, to our knowledge,
of joint estimation of multiple graphical models in a model-selection setting, and the first time it is
shown to improve a prediction score for individual graphical models.
From a neuroscience perspective, learning high-dimensional functional connectivity probabilistic
models opens the door to new studies of brain architecture. In particular, the models estimated with
our strategy are well suited to exploring the graph-community structure resulting from the functional integration, specialization, and segregation of distributed networks. Our preliminary work
suggests that a mesoscopic description of neural ensembles via high-dimensional graphical models
can establish the link between the functional networks observed in brain imaging and the fundamental nervous-system assembly principles. Finally, subject-level Gaussian probabilistic models of
functional connectivity between a few regions have proved useful for statistically-controlled interindividual comparisons on resting-state, with medical applications [9]. Extending such studies to
full-brain analysis, which has so far been limited by the amount of data available on individual subjects, clears the way to new insights into brain pathologies [6, 8].
References
[1] M. Fox and M. Raichle: Spontaneous fluctuations in brain activity observed with functional magnetic
resonance imaging. Nat Rev Neurosci 8 (2007) 700-711
[2] E. Bullmore and O. Sporns: Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci 10 (2009) 186-198
[3] S. Smith, et al. : Correspondence of the brain's functional architecture during activation and rest. PNAS 106 (2009) 13040
[4] S. Achard, et al. : A resilient, low-frequency, small-world human brain functional network with highly
connected association cortical hubs. J Neurosci 26 (2006) 63
[5] O. Sporns, et al. : Organization, development and function of complex brain networks. Trends in Cognitive Sciences 8 (2004) 418-425
[6] G. Cecchi, et al. : Discriminative network models of schizophrenia. In: NIPS 22. (2009) 250-262
[7] W. Seeley, et al. : Neurodegenerative Diseases Target Large-Scale Human Brain Networks. Neuron 62 (2009) 42-52
[8] S. Huang, et al. : Learning brain connectivity of Alzheimer's disease from neuroimaging data. In: Advances in Neural Information Processing Systems 22. (2009) 808-816
[9] G. Varoquaux, et al. : Detection of brain functional-connectivity difference in post-stroke patients using
group-level covariance modeling. In: IEEE MICCAI. (2010)
[10] M. Greicius: Resting-state functional connectivity in neuropsychiatric disorders. Current opinion in
neurology 21 (2008) 424
[11] A. McIntosh and F. Gonzalez-Lima: Structural equation modeling and its application to network analysis in functional brain imaging. Human Brain Mapping 2(1) (1994) 2-22
[12] J. Daunizeau, K. Friston, and S. Kiebel: Variational Bayesian identification and prediction of stochastic
nonlinear dynamic causal models. Physica D 238 (2009)
[13] J. Honorio and D. Samaras: Multi-Task Learning of Gaussian Graphical Models. In: ICML. (2010)
[14] A. Dempster: Covariance selection. Biometrics 28(1) (1972) 157-175
[15] O. Banerjee, et al. : Convex optimization techniques for fitting sparse Gaussian graphical models. In:
ICML. (2006) 96
[16] J. Duchi, S. Gould, and D. Koller: Projected subgradient methods for learning sparse gaussians. In: Proc.
of the Conf. on Uncertainty in AI. (2008)
[17] J. Friedman, T. Hastie, and R. Tibshirani: Sparse inverse covariance estimation with the graphical lasso.
Biostatistics 9(3) (2008) 432-441
[18] M. Yuan and Y. Lin: Model selection and estimation in regression with grouped variables. Journal-Royal
Statistical Society Series B Statistical Methodology 68(1) (2006) 49
[19] J. Guo, et al. : Joint estimation of multiple graphical models. Preprint (2009)
[20] J. Chiquet, Y. Grandvalet, and C. Ambroise: Inferring multiple graphical structures. Stat and Comput
(2010)
[21] A. Rothman, et al. : Sparse permutation invariant covariance estimation. Electron J Stat 2 (2008) 494
[22] O. Ledoit and M. Wolf: A well-conditioned estimator for large-dimensional covariance matrices. J.
Multivar. Anal. 88 (2004) 365?411
[23] C. F. Beckmann and S. M. Smith: Probabilistic independent component analysis for functional magnetic
resonance imaging. Trans Med Im 23(2) (2004) 137-152
[24] M. Perrot, et al. : Joint Bayesian Cortical Sulci Recognition and Spatial Normalization. In: IPMI. (2009)
[25] M. Keller, et al. : Anatomically Informed Bayesian Model Selection for fMRI Group Data Analysis. In:
MICCAI. (2009)
[26] D. Cordes, et al. : Mapping functionally related regions of brain with functional connectivity MR imaging.
American Journal of Neuroradiology 21(9) (2000) 1636-1644
[27] P. Hagmann, et al. : Mapping the structural core of human cerebral cortex. PLoS Biol 6(7) (2008) e159
[28] M. Newman and M. Girvan: Finding and evaluating community structure in networks. Phys rev E 69
(2004) 26113
[29] G. Tononi, O. Sporns, and G. Edelman: A measure for brain complexity: relating functional segregation
and integration in the nervous system. PNAS 91 (1994) 5033
[30] O. Sporns, G. Tononi, and G. Edelman: Theoretical neuroanatomy: relating anatomical and functional
connectivity in graphs and cortical connection matrices. Cereb Cortex 10 (2000) 127
[31] S. White and P. Smyth: A spectral clustering approach to finding communities in graphs. In: 5th SIAM
international conference on data mining. (2005) 274
[32] D. Coynel, et al. : Conditional integration as a way of measuring mediated interactions between largescale brain networks in functional MRI. In: Proc. ISBI. (2010)
3,403 | 4,081 | Empirical Bernstein Inequalities for U-Statistics
Sandrine Anthoine
LATP, Aix-Marseille Université, CNRS
39, rue F. Joliot Curie
F-13013 Marseille, France
[email protected]
Thomas Peel
LIF, Aix-Marseille Université
39, rue F. Joliot Curie
F-13013 Marseille, France
[email protected]
Liva Ralaivola
LIF, Aix-Marseille Université
39, rue F. Joliot Curie
F-13013 Marseille, France
[email protected]
Abstract
We present original empirical Bernstein inequalities for U-statistics with bounded
symmetric kernels q. They are expressed with respect to empirical estimates of
either the variance of q or the conditional variance that appears in the Bernstein-type inequality for U-statistics derived by Arcones [2]. Our result subsumes other
existing empirical Bernstein inequalities, as it reduces to them when U-statistics
of order 1 are considered. In addition, it is based on a rather direct argument using
two applications of the same (non-empirical) Bernstein inequality for U-statistics.
We discuss potential applications of our new inequalities, especially in the realm
of learning ranking/scoring functions. In the process, we exhibit an efficient procedure to compute the variance estimates for the special case of bipartite ranking
that rests on a sorting argument. We also argue that our results may provide test set
bounds and particularly interesting empirical racing algorithms for the problem of
online learning of scoring functions.
1 Introduction
The motivation of the present work lies in the growing interest of the machine learning community in learning tasks that are richer than the now well-studied classification and regression. Among
those, we especially have in mind the task of ranking, where one is interested in learning a ranking
function capable of predicting an accurate ordering of objects according to some attached relevance
information. Tackling such problems generally implies the use of loss functions other than the 0-1
misclassification loss such as, for example, a misranking loss [6] or a surrogate thereof. For (x, y)
and (x', y') two pairs from some space Z := X \times Y (e.g., X = R^d and Y = R), the misranking loss \ell^{rank} and a surrogate convex loss \ell^{sur} may be defined for a scoring function f \in Y^X as:

\ell^{rank}(f, (x, y), (x', y')) := \mathbf{1}_{\{(y - y')(f(x) - f(x')) < 0\}},    (1)
\ell^{sur}(f, (x, y), (x', y')) := \left( 1 - (y - y')(f(x) - f(x')) \right)^2.    (2)
Given such losses or, more generally, a loss \ell : Y^X \times Z \times Z \to R, and a training sample Z_n = \{(X_i, Y_i)\}_{i=1}^n of independent copies of some random variable Z := (X, Y) distributed according to D, the learning task is to derive a function f \in Y^X such that the expected risk R_\ell(f) of f,

R_\ell(f) := E_{Z, Z' \sim D} \, \ell(f, Z, Z') = E_{Z, Z' \sim D} \, \ell(f, (X, Y), (X', Y')),

is as small as possible. In practice, this naturally brings up the empirical estimate \hat{R}_\ell(f, Z_n),

\hat{R}_\ell(f, Z_n) := \frac{1}{n(n-1)} \sum_{i \neq j} \ell(f, (X_i, Y_i), (X_j, Y_j)),    (3)
which is a U-statistic [6, 10].
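For concreteness, a naive Python sketch of the order-2 U-statistic (3) for the misranking loss (1) follows (an assumed illustration; it loops over all n(n-1) ordered pairs).

    # Naive O(n^2) evaluation of Eq. (3) with the misranking loss of Eq. (1).
    import numpy as np

    def empirical_misranking(scores, y):
        n = len(y)
        total = 0.0
        for i in range(n):
            for j in range(n):
                if i != j:
                    total += float((y[i] - y[j]) * (scores[i] - scores[j]) < 0)
        return total / (n * (n - 1))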
An important question is to precisely characterize how \hat{R}_\ell(f, Z_n) is related to R_\ell(f) and, more specifically, one may want to derive an upper bound on R_\ell(f) that is expressed in terms of \hat{R}_\ell(f, Z_n) and other quantities such as a measure of the capacity of the class of functions f belongs to and the size n of Z_n; in other words, we may talk about generalization bounds [4]. Pivotal tools to perform
such analysis are tail/concentration inequalities, which say how probable it is for a function of
several independent variables to deviate from its expectation; of course, the sharper the concentration
inequalities the more accurate the characterization of the relation between the empirical estimate and
its expectation. It is therefore of the utmost importance to have at hand tail inequalities that are sharp;
it is just as important that these inequalities rely as much as possible on empirical quantities.
Here, we propose new empirical Bernstein inequalities for U-statistics. As indicated by the name
(i) our results are Bernstein-type inequalities and therefore make use of information on the variance
of the variables under consideration, (ii) instead of resting on some assumed knowledge about this variance, they only rely on related empirical quantities, and (iii) they apply to U-statistics. Our new
inequalities generalize those of [3] and [13], which also feature points (i) and (ii) (but not (iii)),
while based on simple arguments. To the best of our knowledge, these are the first results that fulfill
(i), (ii) and (iii); they may give rise to a few applications, of which we describe two in the sequel.
The paper is organized as follows. Section 2 introduces the notations and briefly recalls the basics of
U-statistics as well as tail inequalities our results are based upon. Our empirical Bernstein inequalities are presented in Section 3; we also provide an efficient way of computing the empirical variance
when the U-statistics considered are based on the misranking loss `rank of (1). Section 4 discusses
two applications of our new results: test set bounds for bipartite ranking and online ranking.
2 Background

2.1 Notation
The following notation will hold from here on. Z is a random variable of distribution D taking values in Z := X × Y; Z′, Z_1, …, Z_n are independent copies of Z, and Z_n := {Z_i = (X_i, Y_i)}_{i=1}^n and Z_{p:q} := {Z_i}_{i=p}^q.
A^m_n denotes the set A^m_n := {(i_1, …, i_m) : 1 ≤ i_1 ≠ … ≠ i_m ≤ n}, with 0 < m ≤ n. Finally, a function q : Z^m → R is said to be symmetric if the value of q(z) = q(z_1, …, z_m) is independent of the order of the z_i's in z.
2.2 U-statistics and Tail Inequalities
Definition 1 (U-statistic, Hoeffding [10]). The random variable Û_q(Z_n) defined as

Û_q(Z_n) := (1/|A^m_n|) ∑_{i∈A^m_n} q(Z_{i_1}, …, Z_{i_m}),

is a U-statistic of order m with kernel q, when q : Z^m → R is a measurable function on Z^m.
Remark 1. Obviously, E_{Z^m} q(Z_1, …, Z_m) = E_{Z_n} Û_q(Z_n); in addition, Û_q(Z_n) is a lowest variance estimate of E_{Z^m} q(Z_1, …, Z_m) based on Z_n [10]. Also, reusing some notation from the introduction, R̂_ℓ(f, Z_n) of Eq. (3) is a U-statistic of order 2 with kernel q_f(Z, Z′) := ℓ(f, Z, Z′).
Remark 2. Two peculiarities of U-statistics that require special care are the following: (i) they are sums of identically distributed but dependent variables, so special tools must be resorted to in order to deal with these dependencies and characterize the deviation of Û_q(Z_n) from its expectation; and (ii) from an algorithmic point of view, their direct computation may be expensive, as it scales as O(n^m); in Section 3, we show for the special case of bipartite ranking how this complexity can be reduced.
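To fix ideas on Definition 1 and the O(n^m) cost mentioned in (ii), here is a brute-force Python evaluator (function names are ours) that averages the kernel over all ordered m-tuples of distinct indices:

```python
from itertools import permutations
import numpy as np

def u_statistic(z, q, m):
    """Brute-force U-statistic of order m with kernel q: averages q over
    all |A_n^m| = n!/(n-m)! ordered m-tuples of distinct indices."""
    n = len(z)
    tuples = list(permutations(range(n), m))  # O(n^m) terms
    return sum(q(*(z[i] for i in idx)) for idx in tuples) / len(tuples)

rng = np.random.default_rng(0)
z = rng.uniform(size=8)
q_m = lambda *zs: np.prod(zs)        # the product kernel q_m of Example 1
print(u_statistic(z, q_m, m=2))      # unbiased estimate of E[z]^2 = 0.25
```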
[Figure 1 appears here: four plots of the bound values ε_Bernstein and ε_Hoeffding as functions of n, for m = 2 and m = 10.]

Figure 1: First two plots: values of the right-hand side of (5) and (6), for D_uni and kernel q_m for m = 2 and m = 10 (see Example 1), as functions of n. Last two plots: same for D_Ber(0.15).
We now recall three tail inequalities (Eq. (5), (6), (7)) that hold for U-statistics with symmetric and bounded kernels q. Normally, these inequalities make explicit use of the length q_max − q_min of the range [q_min, q_max] of q. To simplify the reading, we will consider without loss of generality that q has range [0, 1] (an easy way of retrieving the results for bounded q is to consider q/‖q‖_∞).

One key quantity that appears in the original versions of tail inequalities (5) and (6) below is ⌊n/m⌋, the integer part of the ratio n/m; this quantity might be thought of as the effective number of data. To simplify the notation, we will assume that n is a multiple of m and, therefore, ⌊n/m⌋ = n/m.
Theorem 1 (First order tail inequality for Û_q, [11]). Hoeffding proved the following:

∀ε > 0,  P_{Z_n}( E_{Z′_n} Û_q(Z′_n) − Û_q(Z_n) ≥ ε ) ≤ 2 exp{ −(n/m) ε² }.   (4)

Hence, ∀δ ∈ (0, 1], with probability at least 1 − δ over the random draw of Z_n:

E_{Z′_n} Û_q(Z′_n) − Û_q(Z_n) ≤ √( ln(2/δ) / (n/m) ).   (5)

To go from the tail inequality (4) to the bound version (5), it suffices to make use of the elementary inequality reversal lemma (Lemma 1) provided in Section 3, used also for the bounds given below.
Theorem 2 (Bernstein inequalities for Û_q, [2, 11]). Hoeffding [11] and, later, Arcones [2] refined the previous result in the form of Bernstein-type inequalities of the form

∀ε > 0,  P_{Z_n}( E_{Z′_n} Û_q(Z′_n) − Û_q(Z_n) ≥ ε ) ≤ a exp{ −(n/m) ε² / (2σ²_{q,m} + b_m ε) }.

For Hoeffding, a = 2, σ²_{q,m} = Σ²_q, where Σ²_q is the variance of q(Z_1, …, Z_m), and b_m = 2/3. Hence, ∀δ ∈ (0, 1], with probability at least 1 − δ:

E_{Z′_n} Û_q(Z′_n) − Û_q(Z_n) ≤ √( 2Σ²_q ln(2/δ) / (n/m) ) + 2 ln(2/δ) / (3(n/m)).   (6)

For Arcones, a = 4, σ²_{q,m} = mσ²_q, where σ²_q is the variance of E_{Z_2,…,Z_m} q(Z_1, Z_2, …, Z_m) (this is a function of Z_1), and b_m = 2^{m+3} m^{m-1} + (2/3) m^{-2}. ∀δ ∈ (0, 1], with probability at least 1 − δ:

E_{Z′_n} Û_q(Z′_n) − Û_q(Z_n) ≤ √( 2mσ²_q ln(4/δ) / (n/m) ) + b_m ln(4/δ) / (n/m).   (7)
With a slight abuse, we will now refer to Eq. (5), (6) and (7) as tail inequalities. In essence, these are confidence intervals at level 1 − δ for E_{Z^m} q(Z_1, …, Z_m) = E_{Z_n} Û_q(Z_n).
Remark 3. Eq. (7) is based on the so-called Hoeffding decomposition of U-statistics [11]. It provides a more accurate Bernstein-type inequality than that of Eq. (6), as mσ²_q is known to be smaller than Σ²_q (see [16]). However, for moderate values of n/m (e.g. n/m < 10^5) and reasonable values of δ (e.g. δ = 0.05), the influence of the log terms might be such that the advantage of (7) over (6) goes unnoticed. Thus, we detail our results focusing on an empirical version of (6).
Example 1. To illustrate how the use of the variance information provides smaller confidence intervals, consider the kernel q_m := ∏_{i=1}^m z_i and two distributions D_uni and D_Ber(p). D_uni is the uniform distribution on [0, 1], for which Σ² = 1/3^m − 1/4^m. D_Ber(p) is the Bernoulli distribution with parameter p ∈ [0, 1], for which Σ² = p^m(1 − p^m). Figure 1 shows the behaviors of (6) and (5) for various values of m as functions of n. Observe that the variance information renders the bound smaller.
3 Main Results
This section presents the main results of the paper. We first introduce the inequality reversal lemma, which allows us to transform tail inequalities into upper bounds (or confidence intervals), as in (5)-(7).

Lemma 1 (Inequality reversal lemma). Let X be a random variable and a, b > 0, c, d ≥ 0 such that

∀ε > 0,  P_X(|X| ≥ ε) ≤ a exp{ −bε² / (c + dε) }.   (8)

Then, with probability at least 1 − δ,

|X| ≤ √( (c/b) ln(a/δ) ) + (d/b) ln(a/δ).   (9)

Proof. Solving for ε such that the right hand side of (8) is equal to δ gives

ε = (1/(2b)) ( d ln(a/δ) + √( d² ln²(a/δ) + 4bc ln(a/δ) ) ).

Using √(a + b) ≤ √a + √b gives an upper bound on ε and provides the result.
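As a quick numerical sanity check of Lemma 1 (a sketch with arbitrary constants), one can solve the right-hand side of (8) for ε exactly and verify that the closed form (9) upper-bounds it:

```python
import numpy as np

def reversal_bound(a, b, c, d, delta):
    """Closed-form upper bound (9)."""
    L = np.log(a / delta)
    return np.sqrt(c * L / b) + d * L / b

def exact_epsilon(a, b, c, d, delta):
    """Exact solution of a*exp(-b*eps^2/(c + d*eps)) = delta."""
    L = np.log(a / delta)
    return (d * L + np.sqrt(d**2 * L**2 + 4 * b * c * L)) / (2 * b)

a, b, c, d, delta = 2.0, 50.0, 1.0, 2.0 / 3.0, 0.05
print(exact_epsilon(a, b, c, d, delta) <= reversal_bound(a, b, c, d, delta))  # True
```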
3.1 Empirical Bernstein Inequalities
Let us now define the empirical variances we will use in our main result.

Definition 2. Let Σ̂²_q be the U-statistic of order 2m defined as

Σ̂²_q(Z_n) := (1/|A^{2m}_n|) ∑_{i∈A^{2m}_n} ½ ( q(Z_{i_1}, …, Z_{i_m}) − q(Z_{i_{m+1}}, …, Z_{i_{2m}}) )²,   (10)

and let σ̂²_q be the U-statistic of order 2m − 1 defined as

σ̂²_q(Z_n) := (1/|A^{2m−1}_n|) ∑_{i∈A^{2m−1}_n} q(Z_{i_1}, Z_{i_2}, …, Z_{i_m}) q(Z_{i_1}, Z_{i_{m+1}}, …, Z_{i_{2m−1}}).   (11)

It is straightforward to see that (cf. the definitions of Σ²_q in (6) and σ²_q in (7))

E_{Z_n} Σ̂²_q(Z_n) = Σ²_q,  and  E_{Z_n} σ̂²_q(Z_n) = σ²_q + E²_{Z^m} q(Z_1, …, Z_m).

We have the following main result.
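For small n, both estimates of Definition 2 can be evaluated by brute force, directly mirroring (10) and (11). The Python sketch below (helper names are ours; exponential in m, for illustration only) does exactly that for the product kernel of Example 1:

```python
from itertools import permutations
import numpy as np

def Sigma_hat_sq(z, q, m):
    """Brute-force evaluation of (10), a U-statistic of order 2m."""
    idxs = list(permutations(range(len(z)), 2 * m))
    vals = [0.5 * (q(*(z[i] for i in t[:m])) - q(*(z[i] for i in t[m:]))) ** 2
            for t in idxs]
    return float(np.mean(vals))

def sigma_hat_sq(z, q, m):
    """Brute-force evaluation of (11), a U-statistic of order 2m - 1;
    both kernel factors share the first index."""
    idxs = list(permutations(range(len(z)), 2 * m - 1))
    vals = [q(*(z[i] for i in t[:m])) * q(z[t[0]], *(z[i] for i in t[m:]))
            for t in idxs]
    return float(np.mean(vals))

rng = np.random.default_rng(0)
z = rng.uniform(size=7)
q = lambda *zs: np.prod(zs)          # kernel q_m with m = 2
print(Sigma_hat_sq(z, q, 2), sigma_hat_sq(z, q, 2))
```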
Theorem 3 (Empirical Bernstein inequalities/bounds). With probability at least 1 − δ over Z_n,

E_{Z′_n} Û_q(Z′_n) − Û_q(Z_n) ≤ √( 2Σ̂²_q ln(4/δ) / (n/m) ) + 5 ln(4/δ) / (n/m).   (12)

And, also, with probability at least 1 − δ (b_m is the same as in (7)),

E_{Z′_n} Û_q(Z′_n) − Û_q(Z_n) ≤ √( 2mσ̂²_q ln(8/δ) / (n/m) ) + (5√m + b_m) ln(8/δ) / (n/m).   (13)
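Evaluating the interval of Theorem 3 requires only Û_q, Σ̂²_q, n, m and δ. A minimal sketch of the half-width in (12), assuming as in Section 2.2 that n is a multiple of m:

```python
import numpy as np

def empirical_bernstein_radius(Sigma_hat_sq, n, m, delta):
    """Half-width of the confidence interval in (12) for a kernel with
    values in [0, 1]: sqrt(2*S*ln(4/d)/(n/m)) + 5*ln(4/d)/(n/m)."""
    ratio = n / m
    L = np.log(4.0 / delta)
    return np.sqrt(2.0 * Sigma_hat_sq * L / ratio) + 5.0 * L / ratio

print(empirical_bernstein_radius(Sigma_hat_sq=0.05, n=1000, m=2, delta=0.05))
```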
Proof. We provide the proof of (12) for the upper bound of the confidence interval; the same reasoning carries over to prove the lower bound. The proof of (13) is very similar. First, let us call Q the kernel of Σ̂²_q:

Q(Z_1, …, Z_{2m}) := ½ ( q(Z_1, …, Z_m) − q(Z_{m+1}, …, Z_{2m}) )².
Q is of order 2m, takes values in [0, 1], but it is not necessarily symmetric. An equivalent symmetric kernel for Σ̂²_q is Q_sym:

Q_sym(Z_1, …, Z_{2m}) := (1/(2m)!) ∑_{π∈P_{2m}} ½ ( q(Z_{π(1)}, …, Z_{π(m)}) − q(Z_{π(m+1)}, …, Z_{π(2m)}) )²,

where P_m is the set of all the permutations over {1, …, m}. This kernel is symmetric (and takes values in [0, 1]) and Theorem 2 can be applied to bound Σ² as follows: with probability at least 1 − δ,

Σ² = E_{Z^{2m}} Q_sym(Z_1, …, Z_{2m}) = E_{Z′_n} Σ̂²_q(Z′_n) ≤ Σ̂²_q(Z_n) + √( 2V(Q_sym) ln(2/δ) / (n/2m) ) + 2 ln(2/δ) / (3(n/2m)),

where V(Q_sym) is the variance of Q_sym. As Q_sym takes values in [0, 1],

V(Q_sym) = E Q²_sym − E² Q_sym ≤ E Q²_sym ≤ E Q_sym = Σ²,
and therefore

Σ² ≤ Σ̂²_q(Z_n) + √( 4Σ² ln(2/δ) / (n/m) ) + 4 ln(2/δ) / (3(n/m)).

(To establish (13) we additionally use the fact that σ²_q ≤ E_{Z_n} σ̂²_q(Z_n).)
Following the approach of [13], we introduce ( √(Σ²) − √( (m/n) ln(2/δ) ) )² and we get

( √(Σ²) − √( (m/n) ln(2/δ) ) )² ≤ Σ̂²_q(Z_n) + 7 ln(2/δ) / (3(n/m)),

and taking the square root of both sides, using 1 + √(7/3) < 3 and √(a + b) ≤ √a + √b again, gives

√(Σ²) ≤ √( Σ̂²_q(Z_n) ) + 3 √( ln(2/δ) / (n/m) ).

We now apply Theorem 2 to bound |E_{Z′_n} Û_q(Z′_n) − Û_q(Z_n)|, and plug in the latter equation, adjusting δ to δ/2 so the obtained inequality still holds with probability 1 − δ. Bounding appropriate constants gives the desired result.
Remark 4. In addition to providing an empirical Bernstein bound for U-statistics based on arbitrary
bounded kernels, our result differs from that of Maurer and Pontil [13] by the way we derive it. Here,
we apply the same tail inequality twice, taking advantage of the fact that estimates for the variances
we are interested in are also U-statistics. Maurer and Pontil use a tail inequality for self-bounded random variables and do not explicitly take advantage of the estimates they use being U-statistics.
3.2 Efficient Computation of the Variance Estimate for Bipartite Ranking
We have just shown how empirical Bernstein inequalities can be derived for U-statistics. The estimates that enter into play in the presented results are U-statistics with kernels of order 2m (or 2m − 1), meaning that a direct approach to compute them would scale as O(n^{2m}) (or O(n^{2m−1})). This scaling might be prohibitive as soon as n gets large.
Here, we propose an efficient way of evaluating the estimate Σ̂²_q (a similar reasoning carries over for σ̂²_q) in the special case where Y = {−1, +1} and the kernel q_f induces the misranking loss (1):

q_f((x, y), (x′, y′)) := 1_{(y−y′)(f(x)−f(x′)) < 0},  ∀f ∈ R^X,

which is a symmetric kernel of order m = 2 with range [0, 1]. In other words, we address the bipartite ranking problem. We have the following result.
Proposition 1 (Efficient computation of Σ̂²_{q_f}). For all n, the computation of

Σ̂²_{q_f}(z_n) = (1/|A⁴_n|) ∑_{i∈A⁴_n} ½ ( 1_{(y_{i1}−y_{i2})(f(x_{i1})−f(x_{i2})) < 0} − 1_{(y_{i3}−y_{i4})(f(x_{i3})−f(x_{i4})) < 0} )²

can be performed in O(n ln n).
Proof. We simply provide an algorithmic way to compute Σ̂²_{q_f}(z_n). To simplify the reading, we replace i_1, i_2, i_3, i_4 by i, j, k, l, respectively. We also drop the normalization factor (hence the use of ∝ instead of = in the first line below). We have

Σ̂²_{q_f}(z_n) ∝ ∑_{i≠j≠k≠l} ( q_f(z_i, z_j) − q_f(z_k, z_l) )²
= ∑_{i≠j≠k≠l} ( q²_f(z_i, z_j) − 2 q_f(z_i, z_j) q_f(z_k, z_l) + q²_f(z_k, z_l) )
= 2(n−2)(n−3) ∑_{i,j} q_f(z_i, z_j) − 2 ∑_{i≠j≠k≠l} q_f(z_i, z_j) q_f(z_k, z_l),

since q²_f = q_f and q_f(z, z) = 0.
The first term of the last line is proportional to the well-known Wilcoxon-Mann-Whitney statistic [9]. There exist efficient ways (O(n ln n)) to compute it, based on sorting the values of the f(x_i)'s. We show how to deal with the second term, using sorting arguments as well. Note that

∑_{i≠j≠k≠l} q_f(z_i, z_j) q_f(z_k, z_l) = ( ∑_{i,j} q_f(z_i, z_j) )² − 4 ∑_{i≠j≠k} q_f(z_i, z_j) q_f(z_i, z_k) − 2 ∑_{i,j} q²_f(z_i, z_j).

We have subtracted from the square of ∑_{i,j} q_f(z_i, z_j) all the products q_f(z_i, z_j) q_f(z_k, z_l) such that exactly one of the variables appears both in q_f(z_i, z_j) and q_f(z_k, z_l), which happens when i = k, i = l, j = k, or j = l; using the symmetry of q_f then provides the second term (together with the factor 4). We have also subtracted all the products q_f(z_i, z_j) q_f(z_k, z_l) where i = k and j = l, or i = l and j = k, in which case the product reduces to q²_f(z_i, z_j) (hence the factor 2); this gives the last term.
Thus (using q²_f = q_f), defining R(z_n) := ∑_{i,j} q_f(z_i, z_j) and doing some simple calculations:

Σ̂²_{q_f}(z_n) = (1/(2|A⁴_n|)) ( −2R²(z_n) + 2(n² − 5n + 8)R(z_n) + 8 ∑_{i≠j≠k} q_f(z_i, z_j) q_f(z_i, z_k) ).   (14)

The only term that now requires special care is the last one (which is proportional to σ̂²_{q_f}(z_n)). Recalling that q_f(z_i, z_j) = 1_{(y_i−y_j)(f(x_i)−f(x_j)) < 0}, we observe that
q_f(z_i, z_j) q_f(z_i, z_k) = 1  ⟺  { y_i = −1, y_j = y_k = +1 and f(x_i) > f(x_j), f(x_k) },  or  { y_i = +1, y_j = y_k = −1 and f(x_i) < f(x_j), f(x_k) }.   (15)

Let us define E⁺(i) and E⁻(i) as

E⁺(i) := { j : y_j = −1, f(x_j) > f(x_i) },  and  E⁻(i) := { j : y_j = +1, f(x_j) < f(x_i) },

and their sizes γ⁺_i := |E⁺(i)| and γ⁻_i := |E⁻(i)|.

For i such that y_i = +1, γ⁺_i is the number of negative instances that have been scored higher than x_i by f. From (15), we see that the contribution of i to the last term of (14) corresponds to the number γ⁺_i(γ⁺_i − 1) of ordered pairs of indices in E⁺(i) (similarly for γ⁻_i, with y_i = −1). Henceforth:
∑_{i≠j≠k} q_f(z_i, z_j) q_f(z_i, z_k) = ∑_{i: y_i = +1} γ⁺_i(γ⁺_i − 1) + ∑_{i: y_i = −1} γ⁻_i(γ⁻_i − 1).
A simple way to compute the first sum (on i such that y_i = +1) is to sort and visit the data by descending order of scores, and then to incrementally compute the γ⁺_i's and the corresponding sum: when a negative instance is encountered, γ⁺ is incremented by 1, and when a positive instance is visited, γ⁺_i(γ⁺_i − 1) is added to the current sum. An identical reasoning works for the second sum. The cost of computing Σ̂²_{q_f} is therefore that of sorting the scores, which is O(n ln n).
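The whole computation fits in a few lines. The following Python sketch (numpy; function names are ours) implements the sorted-scores sweeps for R(z_n) and for the last term of (14), assembles (14), and checks the result against a brute-force enumeration of quadruples on a tiny sample. We include the factor 1/2 from Definition 2 so the output is the Σ̂² used in Theorem 3, and we assume labels in {−1, +1} with no tied scores:

```python
import numpy as np
from itertools import permutations

def q(y_i, y_j, s_i, s_j):
    return float((y_i - y_j) * (s_i - s_j) < 0)

def variance_estimate(scores, y):
    """O(n ln n) evaluation of (14) for the misranking kernel."""
    n = len(y)
    order = np.argsort(-np.asarray(scores))        # descending scores
    W = T_pos = neg_above = 0
    for i in order:
        if y[i] == -1:
            neg_above += 1                         # one more higher-scored negative
        else:
            W += neg_above                         # gamma_plus_i
            T_pos += neg_above * (neg_above - 1)   # ordered pairs in E+(i)
    T_neg = pos_below = 0
    for i in order[::-1]:                          # ascending scores
        if y[i] == +1:
            pos_below += 1                         # one more lower-scored positive
        else:
            T_neg += pos_below * (pos_below - 1)   # ordered pairs in E-(i)
    R = 2 * W                                      # R(z_n) = sum over ordered pairs
    T = T_pos + T_neg
    A4 = n * (n - 1) * (n - 2) * (n - 3)
    return (-2 * R**2 + 2 * (n**2 - 5*n + 8) * R + 8 * T) / (2 * A4)

def variance_brute(scores, y):
    n = len(y)
    vals = [0.5 * (q(y[i], y[j], scores[i], scores[j])
                   - q(y[k], y[l], scores[k], scores[l])) ** 2
            for i, j, k, l in permutations(range(n), 4)]
    return float(np.mean(vals))

rng = np.random.default_rng(0)
s = rng.normal(size=9)
y = rng.choice([-1, 1], size=9)
print(variance_estimate(s, y), variance_brute(s, y))   # the two should agree
```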
4 Applications and Discussion

Here, we mention potential applications of the new empirical inequalities we have just presented.
[Figure 2 appears here: left, a scatter plot of the banana dataset; right, the curves ε_Hoeffding and ε_Bernstein as functions of n_test.]

Figure 2: Left: UCI banana dataset, data labelled +1 (−1) in red (green). Right: half the confidence interval of the Hoeffding bound and that of the empirical Bernstein bound as functions of n_test.
4.1 Test Set Bounds
A direct use of the empirical Bernstein inequalities is to draw test set bounds. In this scenario, a sample Z_n is split into a training set Z_train := Z_{1:n_train} of n_train data and a hold-out set Z_test := Z_{n_train+1:n} of size n_test. Z_train is used to train a model f that minimizes an empirical risk based on a U-statistic inducing loss (such as in (1) or (2)) and Z_test is used to compute a confidence interval on the expected risk of f. For instance, if we consider the bipartite ranking problem, the loss is ℓ^rank, the corresponding kernel is q_f(Z, Z′) = ℓ^rank(f, Z, Z′), and, with probability at least 1 − δ,

R_{ℓ^rank}(f) ≤ R̂_{ℓ^rank}(f, Z_test) + √( 4Σ̂²_{q_f}(Z_test) ln(4/δ) / n_test ) + 10 ln(4/δ) / n_test,   (16)

where Σ̂²_{q_f}(Z_test) is naturally the empirical variance of q_f computed on Z_test.
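The bound (16) is then immediate to compute on a hold-out set. A self-contained sketch (brute-force O(n⁴) variance for clarity; Proposition 1 gives the O(n ln n) alternative), evaluating the bound for an arbitrary fixed scorer:

```python
import numpy as np
from itertools import permutations

def test_set_bound(scores, y, delta):
    """Upper confidence bound (16) on the bipartite misranking risk of a
    fixed scorer, evaluated on held-out data."""
    n = len(y)
    q = lambda i, j: float((y[i] - y[j]) * (scores[i] - scores[j]) < 0)
    risk = sum(q(i, j) for i, j in permutations(range(n), 2)) / (n * (n - 1))
    var_hat = np.mean([0.5 * (q(i, j) - q(k, l)) ** 2
                       for i, j, k, l in permutations(range(n), 4)])
    L = np.log(4.0 / delta)
    return risk + np.sqrt(4.0 * var_hat * L / n) + 10.0 * L / n

rng = np.random.default_rng(1)
s = rng.normal(size=12)
labels = np.where(s + rng.normal(size=12) > 0, 1, -1)
print(test_set_bound(s, labels, delta=0.05))
```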
Figure 2 displays the behavior of such test set bounds as n_test grows for the UCI banana dataset. To produce this plot, we have learned a linear scoring function f(·) = ⟨w, ·⟩ by minimizing

λ‖w‖² + ∑_{i≠j} (1 − (Y_i − Y_j)⟨w, X_i − X_j⟩)²

for λ = 1.0. Of course, a purely linear scoring function would not make it possible to achieve good ranking accuracy, so we in fact work in the reproducing kernel Hilbert space associated with the Gaussian kernel k(x, x′) = exp(−‖x − x′‖²/2). We train our scoring function on n_train = 1000 data points and evaluate the test set bound on n_test = 100, 500, 1000, 5000, 10000 data points. Figure 2 (right) reports the size of half the confidence interval of the Hoeffding bound (5) and that of the empirical Bernstein bound given in (16). Just as in the situation described in Example 1, the use of variance information gives rise to smaller confidence intervals, even for moderate sizes of test sets.
4.2 Online Ranking and Empirical Racing Algorithms
Another application that we would like to describe is online bipartite ranking. Due to space limitation, we only provide the main ideas on how we think our empirical tail inequalities and the efficient computation of the variance estimates we propose might be particularly useful in this scenario.

First, let us make precise what we mean by online bipartite ranking. Obviously, this means that Y = {−1, +1} and that the loss of interest is ℓ^rank. In addition, it means that given a training set Z = {Z_i := (X_i, Y_i)}_{i=1}^n, the learning procedure will process the data of Z incrementally to give rise to hypotheses f_1, f_2, …, f_T. As ℓ^rank entails a kernel of order m = 2, we assume that n = 2T and that we process the data from Z pair by pair, i.e. (Z_1, Z_2) are used to learn f_1, (Z_3, Z_4) and f_1 are used to learn f_2 and, more generally, (Z_{2t−1}, Z_{2t}) and f_{t−1} are used to produce f_t (there exist more clever ways to handle the data but this goes beyond the scope of the present paper). We do not specify any learning algorithm, but we may imagine trying to minimize a penalized empirical risk based on the surrogate loss ℓ^sur: if linear functions f(·) = ⟨w, ·⟩ are considered and a penalization
like ‖w‖² is used, then the optimization problem to solve is of the same form as in the batch case:

λ‖w‖² + ∑_{i≠j} (1 − (Y_i − Y_j)⟨w, X_i − X_j⟩)²,

but is solved incrementally here. Rank-1 update formulas for inverses of matrices easily provide means to incrementally solve this problem as new data arrive (this is the main reason why we have mentioned this surrogate function).
As evoked by [5], a nice feature of online learning is that the expected risk of hypothesis f_t can be estimated on the n − 2t examples of Z it was not trained on. Namely, when 2τ data have been processed, there exist τ hypotheses f_1, …, f_τ and, for t < τ, with probability at least 1 − δ:

R_{ℓ^rank}(f_t) ≤ R̂_{ℓ^rank}(f_t, Z_{2t:2τ}) + √( 2Σ̂²_{q_f}(Z_{2t:2τ}) ln(4/δ) / (τ − t) ) + 5 ln(4/δ) / (τ − t).
If one wants these confidence intervals to simultaneously hold for all t and all τ with probability 1 − δ, basic computations to calculate the number of pairs (t, τ), with 1 ≤ t < τ ≤ n, show that it suffices to adjust δ to 4δ/(n + 1)². Hence, with probability at least 1 − δ: ∀ 1 ≤ t < τ ≤ n,

R_{ℓ^rank}(f_t) ≤ R̂_{ℓ^rank}(f_t, Z_{2t:2τ}) + √( 4Σ̂²_{q_f}(Z_{2t:2τ}) ln((n + 1)/δ) / (τ − t) ) + 10 ln((n + 1)/δ) / (τ − t).   (17)
We would like to draw the attention of the reader to two features: one has to do with statistical considerations and the other with algorithmic ones. First, if the confidence intervals simultaneously hold for all t and all τ as in (17), it is possible, as the online learning process goes through, to discard the hypotheses f_t whose lower bound (according to (17)) on R_{ℓ^rank}(f_t) is higher than the upper bound (according to (17) as well) on R_{ℓ^rank}(f_{t′}) for some other hypothesis f_{t′}. This corresponds to a racing algorithm as described in [12]. Theoretically analyzing the relevance of such a race can be easily done with the results of [14], which deal with empirical Bernstein racing, but for non-U-statistics. The full analysis will be provided in a long version of the present paper. Second, it is algorithmically possible to preserve some efficiency in computing the various variance estimates through the online learning process: these computations rely on sorting arguments, and it is possible to take advantage of structures like binary search trees, such as AVL trees, that are precisely designed to efficiently maintain and update sorted lists of numbers. The remaining question is whether it is possible to share such structures to summarize the sorted lists of scores for the various hypotheses (recall that the scores are computed on the same data). This will be the subject of further research.
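The race itself is bookkeeping: once every hypothesis carries a simultaneous interval as in (17), any f_t whose lower bound exceeds another hypothesis's upper bound can be discarded. A schematic sketch (the interval values are hypothetical and assumed precomputed):

```python
def race(intervals):
    """intervals: dict mapping hypothesis id -> (lower, upper) bounds on its
    misranking risk, all holding simultaneously as in (17).
    Returns the ids that survive the empirical Bernstein race."""
    best_upper = min(u for (_, u) in intervals.values())
    return [h for h, (lo, _) in intervals.items() if lo <= best_upper]

print(race({1: (0.10, 0.30), 2: (0.32, 0.45), 3: (0.05, 0.20)}))  # [1, 3]
```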
5 Conclusion
We have proposed new empirical Bernstein inequalities designed for U-statistics. They generalize the empirical inequalities of [13] and [3], while they merely result from two applications of the same non-empirical tail inequality for U-statistics. We also show how, in the bipartite ranking situation, the empirical variance can be efficiently computed. We mention potential applications, with illustrative results for the case of test set bounds in the realm of bipartite ranking. In addition to the possible extensions discussed in the previous section, we wonder whether it is possible to draw similar empirical inequalities for other types of rich statistics such as, e.g., linear rank statistics [8]. Obviously, we plan to work on establishing generalization bounds derived from the new concentration inequalities presented. This would require carefully defining a sound notion of capacity for U-statistic-based classes of functions (inspired, for example, by localized Rademacher complexities). Such new bounds would be compared with those proposed in [1, 6, 7, 15] for the bipartite ranking and/or pairwise classification problems. Finally, we also plan to carry out intensive simulations, in particular for the task of online ranking, to get even more insights on the relevance of our contribution.
Acknowledgments
This work is partially supported by the IST Program of the EC, under the FP7 Pascal 2 Network of
Excellence, ICT-216886-NOE. LR is partially supported by the ANR project ASAP.
References

[1] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the area under the ROC curve. Journal of Machine Learning Research, 6:393-425, 2005.
[2] M. A. Arcones. A Bernstein-type inequality for U-statistics and U-processes. Statistics & Probability Letters, 22(3):239-247, 1995.
[3] J.-Y. Audibert, R. Munos, and C. Szepesvári. Tuning bandit algorithms in stochastic environments. In ALT '07: Proceedings of the 18th International Conference on Algorithmic Learning Theory, pages 150-165, Berlin, Heidelberg, 2007. Springer-Verlag.
[4] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: a survey of some recent advances. ESAIM: P&S, 9:323-375, 2005.
[5] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of online learning algorithms. IEEE Transactions on Information Theory, 50(9):2050-2057, 2004.
[6] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical minimization of U-statistics. The Annals of Statistics, 36(2):844-874, April 2008.
[7] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.
[8] J. Hájek and Z. Šidák. Theory of Rank Tests. Academic Press, 1967.
[9] J. A. Hanley and B. J. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143(1):29-36, April 1982.
[10] W. Hoeffding. A class of statistics with asymptotically normal distribution. Annals of Mathematical Statistics, 19(3):293-325, 1948.
[11] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13-30, 1963.
[12] O. Maron and A. Moore. Hoeffding races: accelerating model selection search for classification and function approximation. In Advances in Neural Information Processing Systems (NIPS 93), pages 59-66, 1993.
[13] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In COLT '09: Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[14] V. Mnih, C. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In ICML '08: Proceedings of the 25th International Conference on Machine Learning, pages 672-679, New York, NY, USA, 2008. ACM.
[15] C. Rudin and R. E. Schapire. Margin-based ranking and an equivalence between AdaBoost and RankBoost. Journal of Machine Learning Research, 10:2193-2232, October 2009.
[16] R. J. Serfling. Approximation Theorems of Mathematical Statistics. J. Wiley & Sons, 1980.
3,404 | 4,082 | Copula Processes
Andrew Gordon Wilson*
Department of Engineering
University of Cambridge
[email protected]
Zoubin Ghahramani†
Department of Engineering
University of Cambridge
[email protected]
Abstract
We define a copula process which describes the dependencies between arbitrarily
many random variables independently of their marginal distributions. As an example, we develop a stochastic volatility model, Gaussian Copula Process Volatility
(GCPV), to predict the latent standard deviations of a sequence of random variables. To make predictions we use Bayesian inference, with the Laplace approximation, and with Markov chain Monte Carlo as an alternative. We find our model
can outperform GARCH on simulated and financial data. And unlike GARCH,
GCPV can easily handle missing data, incorporate covariates other than time, and
model a rich class of covariance structures.
Imagine measuring the distance of a rocket as it leaves Earth, and wanting to know how these measurements correlate with one another. How much does the value of the measurement at fifteen
minutes depend on the measurement at five minutes? Once we've learned this correlation structure, suppose we want to compare it to the dependence between measurements of the rocket's velocity. To do this, it is convenient to separate dependence from the marginal distributions of our measurements. At any given time, a rocket's distance from Earth could have a Gamma distribution, while its
velocity could have a Gaussian distribution. And separating dependence from marginal distributions
is precisely what a copula function does.
While copulas have recently become popular, especially in financial applications [1, 2], as Nelsen
[3] writes, "the study of copulas and the role they play in probability, statistics, and stochastic processes is a subject still in its infancy. There are many open problems..." Typically only bivariate
(and recently trivariate) copulas are being used and studied. In our introductory example, we are
interested in learning the correlations in different stochastic processes, and comparing them. It
would therefore be useful to have a copula process, which can describe the dependencies between
arbitrarily many random variables independently of their marginal distributions. We define such a
process. And as an example, we develop a stochastic volatility model, Gaussian Copula Process
Volatility (GCPV). In doing so, we provide a Bayesian framework for learning the marginal
distributions and dependency structure of what we call a Gaussian copula process.
The volatility of a random variable is its standard deviation. Stochastic volatility models are used
to predict the volatilities of a heteroscedastic sequence, a sequence of random variables with different variances, like distance measurements of a rocket as it leaves the Earth. As the rocket gets
further away, the variance on the measurements increases. Heteroscedasticity is especially important in econometrics; the returns on equity indices, like the S&P 500, or on currency exchanges, are
heteroscedastic. Indeed, in 2003, Robert Engle won the Nobel Prize in economics "for methods of analyzing economic time series with time-varying volatility". GARCH [4], a generalized version of Engle's ARCH, is arguably unsurpassed for predicting the volatility of returns on equity indices and
currency exchanges [5, 6, 7]. GCPV can outperform GARCH, and is competitive on financial data
that especially suits GARCH [8, 9, 10]. Before discussing GCPV, we first introduce copulas and the
copula process. For a review of Gaussian processes, see Rasmussen and Williams [11].
* http://mlg.eng.cam.ac.uk/andrew
† Also at the machine learning department at Carnegie Mellon University.
1 Copulas
Copulas are important because they separate the dependency structure between random variables from their marginal distributions. Intuitively, we can describe the dependency structure of any multivariate joint distribution H(x_1, …, x_n) = P(X_1 ≤ x_1, …, X_n ≤ x_n) through a two step process. First we take each univariate random variable X_i and transform it through its cumulative distribution function (cdf) F_i to get U_i = F_i(X_i), a uniform random variable. We then express the dependencies between these transformed variables through the n-copula C(u_1, …, u_n). Formally, an n-copula C : [0, 1]^n → [0, 1] is a multivariate cdf with uniform univariate marginals: C(u_1, u_2, …, u_n) = P(U_1 ≤ u_1, U_2 ≤ u_2, …, U_n ≤ u_n), where U_1, U_2, …, U_n are standard uniform random variables. Sklar [12] precisely expressed our intuition in the theorem below.
Theorem 1.1 (Sklar's theorem). Let H be an n-dimensional distribution function with marginal distribution functions F_1, F_2, …, F_n. Then there exists an n-copula C such that for all (x_1, x_2, …, x_n) ∈ [−∞, ∞]^n,

H(x_1, x_2, …, x_n) = C(F_1(x_1), F_2(x_2), …, F_n(x_n)) = C(u_1, u_2, …, u_n).   (1)

If F_1, F_2, …, F_n are all continuous then C is unique; otherwise C is uniquely determined on Range F_1 × Range F_2 × ⋯ × Range F_n. Conversely, if C is an n-copula and F_1, F_2, …, F_n are distribution functions, then the function H is an n-dimensional distribution function with marginal distribution functions F_1, F_2, …, F_n.
As a corollary, if F_i^{(−1)}(u) = inf{x : F_i(x) ≥ u}, the quasi-inverse of F_i, then for all (u_1, u_2, …, u_n) ∈ [0, 1]^n,

C(u_1, u_2, …, u_n) = H(F_1^{(−1)}(u_1), F_2^{(−1)}(u_2), …, F_n^{(−1)}(u_n)).   (2)
In other words, (2) can be used to construct a copula. For example, the bivariate Gaussian copula is defined as

C(u, v) = Φ_ρ(Φ⁻¹(u), Φ⁻¹(v)),   (3)

where Φ_ρ is a bivariate Gaussian cdf with correlation coefficient ρ, and Φ is the standard univariate Gaussian cdf. Li [2] popularised the bivariate Gaussian copula, by showing how it could be used to study financial risk and default correlation, using credit derivatives as an example.
By substituting F (x) for u and G(y) for v in equation (3), we have a bivariate distribution H(x, y),
with a Gaussian dependency structure, and marginals F and G. Regardless of F and G, the resulting
H(x, y) can still be uniquely expressed as a Gaussian copula, so long as F and G are continuous. It is
then a copula itself that captures the underlying dependencies between random variables, regardless
of their marginal distributions. For this reason, copulas have been called dependence functions
[13, 14]. Nelsen [3] contains an extensive discussion of copulas.
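The construction around (3) is straightforward to operationalize: draw correlated Gaussians, push them through Φ to obtain uniforms coupled by a Gaussian copula, then through any inverse cdfs to impose the marginals. A sketch using Python with scipy; the Gamma and Beta marginals are chosen arbitrarily:

```python
import numpy as np
from scipy import stats

rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
z = stats.multivariate_normal(mean=[0, 0], cov=cov).rvs(5000, random_state=0)
u = stats.norm.cdf(z)                      # uniforms coupled by a Gaussian copula
x = stats.gamma(a=2.0).ppf(u[:, 0])        # arbitrary marginal F^{-1}
y = stats.beta(a=2.0, b=5.0).ppf(u[:, 1])  # arbitrary marginal G^{-1}
# mapping back through the cdfs and the Gaussian quantile recovers ~rho:
print(np.corrcoef(stats.norm.ppf(stats.gamma(a=2.0).cdf(x)),
                  stats.norm.ppf(stats.beta(a=2.0, b=5.0).cdf(y)))[0, 1])
```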
2 Copula Processes
Imagine choosing a covariance function, and then drawing a sample function at some finite number
of points from a Gaussian process. The result is a sample from a collection of Gaussian random
variables, with a dependency structure encoded by the specified covariance function. Now, suppose
we transform each of these values through a univariate Gaussian cdf, such that we have a sample
from a collection of uniform random variables. These uniform random variables also have this
underlying Gaussian process dependency structure. One might call the resulting values a draw from
a Gaussian-Uniform Process. We could subsequently put these values through an inverse beta cdf,
to obtain a draw from what could be called a Gaussian-Beta Process: the values would be a sample
from beta random variables, again with an underlying Gaussian process dependency structure. We
could also transform the uniform values with different inverse cdfs, which would give a sample from
different random variables, with dependencies encoded by the Gaussian process.
The above procedure is a means to generate samples from arbitrarily many random variables, with
arbitrary marginal distributions, and desired dependencies. It is an example of how to use what we
call a copula process: in this case, a Gaussian copula process, since a Gaussian copula describes
the dependency structure of a finite number of samples. We now formally define a copula process.
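The Gaussian-Beta process just described can be simulated directly. A sketch with numpy/scipy; the squared exponential covariance is our arbitrary choice for illustration:

```python
import numpy as np
from scipy import stats

t = np.linspace(0, 1, 100)
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.1**2)      # GP covariance
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(t)))         # jitter for stability
f = L @ np.random.default_rng(0).normal(size=len(t))      # a GP draw
u = stats.norm.cdf(f)                  # a draw from a "Gaussian-Uniform" process
w = stats.beta(a=2.0, b=2.0).ppf(u)    # Beta marginals, GP dependency structure
```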
Definition 2.1 (Copula process). Let {W_t} be a collection of random variables indexed by t ∈ T, with marginal distribution functions F_t, and let Q_t = F_t(W_t). Further, let μ be a stochastic process measure with marginal distribution functions G_t and joint distribution function H. Then W_t is copula process distributed with base measure μ, or W_t ∼ CP(μ), if and only if for all n ∈ N, a_i ∈ R,

P( ∩_{i=1}^n { G_{t_i}^{(−1)}(Q_{t_i}) ≤ a_i } ) = H_{t_1,t_2,…,t_n}(a_1, a_2, …, a_n).   (4)

Each Q_{t_i} ∼ Uniform(0, 1), and G_{t_i}^{(−1)} is the quasi-inverse of G_{t_i}, as previously defined.
Definition 2.2 (Gaussian copula process). W_t is Gaussian copula process distributed if it is copula process distributed and the base measure μ is a Gaussian process. If there is a mapping ψ such that ψ(W_t) ∼ GP(m(t), k(t, t′)), then we write W_t ∼ GCP(ψ, m(t), k(t, t′)).

For example, if we have W_t ∼ GCP with m(t) = 0 and k(t, t) = 1, then in the definition of a copula process, G_t = Φ, the standard univariate Gaussian cdf, and H is the usual GP joint distribution function. Supposing this GCP is a Gaussian-Beta process, then ψ = Φ⁻¹ ∘ F_B, where F_B is a univariate Beta cdf. One could similarly define other copula processes.
We described generally how a copula process can be used to generate samples of arbitrarily many
random variables with desired marginals and dependencies. We now develop a specific and practical
application of this framework. We introduce a stochastic volatility model, Gaussian Copula Process
Volatility (GCPV), as an example of how to learn the joint distribution of arbitrarily many random
variables, the marginals of these random variables, and to make predictions. To do this, we fit a
Gaussian copula process by using a type of Warped Gaussian Process [15]. However, our methodology varies substantially from Snelson et al. [15], since we are doing inference on latent variables
as opposed to observations, which is a much greater undertaking that involves approximations, and
we are doing so in a different context.
3 Gaussian Copula Process Volatility
Assume we have a sequence of observations y = (y_1, …, y_n)ᵀ at times t = (t_1, …, t_n)ᵀ. The observations are random variables with different latent standard deviations. We therefore have n unobserved standard deviations, σ_1, …, σ_n, and want to learn the correlation structure between these standard deviations, and also to predict the distribution of σ* at some unrealised time t*.

We model the standard deviation function as a Gaussian copula process:

σ_t ∼ GCP(g⁻¹, 0, k(t, t′)).   (5)

Specifically,

f(t) ∼ GP(m(t) = 0, k(t, t′)),   (6)
σ(t) = g(f(t), ω),   (7)
y(t) ∼ N(0, σ²(t)),   (8)

where g is a monotonic warping function, parametrized by ω. For each of the observations y = (y_1, …, y_n)ᵀ we have corresponding GP latent function values f = (f_1, …, f_n)ᵀ, where σ(t_i) = g(f_i, ω), using the shorthand f_i to mean f(t_i).
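Equations (5)-(8) define a generative model that is easy to sample from. A numpy sketch, using the softplus-style warping introduced later in (20) with K = 1 (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 200)
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.5**2)     # k(t, t'), amplitude 1
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(t)))
f = L @ rng.normal(size=len(t))                          # (6): latent GP draw
a, b, c = 1.0, 1.0, 0.0                                  # omega, with K = 1
sigma = a * np.log(np.exp(b * (f + c)) + 1.0)            # (7): sigma = g(f, omega)
y = rng.normal(0.0, sigma)                               # (8): heteroscedastic data
```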
σ_t ∼ GCP because any finite sequence (σ_1, …, σ_p) is distributed as a Gaussian copula:

P(σ_1 ≤ a_1, …, σ_p ≤ a_p) = P(g⁻¹(σ_1) ≤ g⁻¹(a_1), …, g⁻¹(σ_p) ≤ g⁻¹(a_p))
= Φ_Γ(g⁻¹(a_1), …, g⁻¹(a_p)) = Φ_Γ(Φ⁻¹(F(a_1)), …, Φ⁻¹(F(a_p)))
= Φ_Γ(Φ⁻¹(u_1), …, Φ⁻¹(u_p)) = C(u_1, …, u_p),   (9)

where Φ is the standard univariate Gaussian cdf (supposing k(t, t) = 1), Φ_Γ is a multivariate Gaussian cdf with covariance matrix Γ_ij = cov(g⁻¹(σ_i), g⁻¹(σ_j)), and F is the marginal distribution of
each σ_i. In (5), we have ψ = g⁻¹, because it is g⁻¹ which maps σ_t to a GP. The specification in (5) is equivalently expressed by (6) and (7). With GCPV, the form of g is learned so that g⁻¹(σ_t) is best modelled by a GP. By learning g, we learn the marginal of each σ: F(a) = Φ(g⁻¹(a)) for a ∈ R. Recently, a different sort of "kernel copula process" has been used, where the marginals of the variables being modelled are not learned [16].¹ Further, we also consider a more subtle and flexible form of our model, where the function g itself is indexed by time: g = g_t(f(t), ω). We only assume that the marginal distributions of σ_t are stationary over "small" time periods, and for each of these time periods (5)-(7) hold true. We return to this in the final discussion section.
Here we have assumed that each observation, conditioned on knowing its variance, is normally
distributed with zero mean. This is a common assumption in heteroscedastic models. The zero
mean and normality assumptions can be relaxed and are not central to this paper.
4 Predictions with GCPV
Ultimately, we wish to infer p(σ(t*)|y, z), where z = {θ, ω}, and θ are the hyperparameters of the GP covariance function. To do this, we sample from

p(f*|y, z) = ∫ p(f*|f, θ) p(f|y, z) df   (10)

and then transform these samples by g. Letting (C_f)_ij = δ_ij g(f_i, ω)², where δ_ij is the Kronecker delta, K_ij = k(t_i, t_j), (k*)_i = k(t*, t_i), we have

p(f|y, z) = N(f; 0, K) N(y; 0, C_f) / p(y|z),   (11)
p(f*|f, θ) = N(k*ᵀK⁻¹f, k(t*, t*) − k*ᵀK⁻¹k*).   (12)

We also wish to learn z, which we can do by finding the ẑ that maximizes the marginal likelihood,

p(y|z) = ∫ p(y|f, ω) p(f|θ) df.   (13)
Unfortunately, for many functions g, (10) and (13) are intractable. Our methods of dealing with this can be used in very general circumstances, where one has a Gaussian process prior, but an (optionally parametrized) non-Gaussian likelihood. We use the Laplace approximation to estimate p(f|y, z) as a Gaussian. Then we can integrate (10) for a Gaussian approximation to p(f*|y, z), which we sample from to make predictions of σ*. Using Laplace, we can also find an expression for an approximate marginal likelihood, which we maximize to determine z. Once we have found z with Laplace, we use Markov chain Monte Carlo to sample from p(f*|y, z), and compare that to using Laplace to sample from p(f*|y, z). In the supplement we relate this discussion to (9).
4.1 Laplace Approximation
The goal is to approximate (11) with a Gaussian, so that we can evaluate (10) and (13) and make predictions. In doing so, we follow Rasmussen and Williams [11] in their treatment of Gaussian process classification, except we use a parametrized likelihood, and modify Newton's method. First, consider as an objective function the logarithm of an unnormalized (11):

s(f|y, z) = log p(y|f, ω) + log p(f|θ).   (14)

The Laplace approximation uses a second order Taylor expansion about the f̂ which maximizes (14), to find an approximate objective s̃(f|y, z). So the first step is to find f̂, for which we use Newton's method. The Newton update is f_new = f − (∇∇s(f))⁻¹ ∇s(f). Differentiating (14),

∇s(f|y, z) = ∇log p(y|f, ω) − K⁻¹f,   (15)
∇∇s(f|y, z) = ∇∇log p(y|f, ω) − K⁻¹ = −W − K⁻¹,   (16)

where W is the diagonal matrix −∇∇log p(y|f, ω).

¹ Note added in proof: Also, for a very recent related model, see Rodríguez et al. [17].
If the likelihood function p(y|f, ω) is not log concave, then W may have negative entries. Vanhatalo et al. [18] found this to be problematic when doing Gaussian process regression with a Student-t likelihood. They instead use an expectation-maximization (EM) algorithm for finding f̂, and iterate ordered rank one Cholesky updates to evaluate the Laplace approximate marginal likelihood. But EM can converge slowly, especially near a local optimum, and each of the rank one updates is vulnerable to numerical instability. With a small modification of Newton's method, we often get close to quadratic convergence for finding f̂, and can evaluate the Laplace approximate marginal likelihood in a numerically stable fashion, with no approximate Cholesky factors, and optimal computational requirements. Some comments are in the supplementary material but, in short, we use an approximate negative Hessian, −∇∇s ≈ M + K⁻¹, which is guaranteed to be positive definite, since M is formed on each iteration by zeroing the negative entries of W. For stability, we reformulate our optimization in terms of B = I + M^{1/2}KM^{1/2}, and let Q = M^{1/2}B⁻¹M^{1/2}, b = Mf + ∇log p(y|f), and a = b − QKb. Since (K⁻¹ + M)⁻¹ = K − KQK, the Newton update becomes f_new = Ka.

With these updates we find f̂ and get an expression for s̃, which we use to approximate (13) and (11). The approximate marginal likelihood q(y|z) is given by ∫ exp(s̃) df. Taking its logarithm,

log q(y|z) = −½ f̂ᵀ a_f̂ + log p(y|f̂) − ½ log|B_f̂|,   (17)

where B_f̂ is B evaluated at f̂, and a_f̂ is a numerically stable evaluation of K⁻¹f̂.
To learn the parameters z, we use conjugate gradient descent to maximize (17) with respect to z. Since f̂ is a function of z, we initialize z, and update f̂ every time we vary z. Once we have found an optimum ẑ, we can make predictions. By exponentiating s̃, we find a Gaussian approximation to the posterior (11), q(f|y, z) = N(f̂, K − KQK). The product of this approximate posterior with p(f*|f) is Gaussian. Integrating this product, we approximate p(f*|y, z) as

q(f*|y, z) = N(k*ᵀ ∇log p(y|f̂), k(t*, t*) − k*ᵀQk*).   (18)

Given n training observations, the cost of each Newton iteration is dominated by computing the Cholesky decomposition of B, which takes O(n³) operations. The objective function typically changes by less than 10⁻⁶ after 3 iterations. Once Newton's method has converged, it takes only O(1) operations to draw from q(f*|y, z) and make predictions.
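The stabilized Newton iteration translates almost line for line into code. A condensed numpy sketch for the GP-EXP variant (σ = exp(f), for which W is always non-negative); this is an illustration under those assumptions, not the authors' implementation:

```python
import numpy as np

def laplace_mode(y, K, iters=20):
    """Stabilized Newton iteration for the Laplace mode f_hat, for the
    likelihood y ~ N(0, exp(2f)), i.e. sigma = exp(f)."""
    n = len(y)
    f = np.zeros(n)
    for _ in range(iters):
        grad = -1.0 + y**2 * np.exp(-2 * f)          # d log p(y|f) / df
        W = 2 * y**2 * np.exp(-2 * f)                # -d^2 log p(y|f) / df^2
        M = np.maximum(W, 0)                         # zero any negative entries
        sqM = np.sqrt(M)
        B = np.eye(n) + sqM[:, None] * K * sqM[None, :]
        Lc = np.linalg.cholesky(B)
        b = M * f + grad
        QKb = sqM * np.linalg.solve(Lc.T, np.linalg.solve(Lc, sqM * (K @ b)))
        f = K @ (b - QKb)                            # f_new = K a,  a = b - QKb
    return f

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 50)
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.5**2) + 1e-6 * np.eye(50)
y = rng.normal(0, np.exp(np.sin(t)))
f_hat = laplace_mode(y, K)      # mode of the latent log-volatility posterior
```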
4.2 Markov chain Monte Carlo
We use Markov chain Monte Carlo (MCMC) to sample from (11), so that we can later sample from p(σ*|y, z) to make predictions. Sampling from (11) is difficult, because the variables f are strongly coupled by a Gaussian process prior. We use a new technique, Elliptical Slice Sampling [19], and find it extremely effective for this purpose. It was specifically designed to sample from posteriors with correlated Gaussian priors. It has no free parameters, and jointly updates every element of f. For our setting, it is over 100 times as fast as axis aligned slice sampling with univariate updates.

To make predictions, we take J samples of p(f|y, z), {f¹, …, f^J}, and then approximate (10) as a mixture of J Gaussians:

p(f*|y, z) ≈ (1/J) ∑_{i=1}^J p(f*|f^i, θ).   (19)

Each of the Gaussians in this mixture has equal weight. So for each sample of f*|y, we uniformly choose a random p(f*|f^i, θ) and draw a sample. In the limit J → ∞, we are sampling from the exact p(f*|y, z). Mapping these samples through g gives samples from p(σ*|y, z). After one O(n³) and one O(J) operation, a draw from (19) takes O(1) operations.
4.3 Warping Function
The warping function g maps f_i, a GP function value, to σ_i, a standard deviation. Since f_i can take any value in R, and σ_i can take any non-negative real value, g : R → R⁺. For each f_i to correspond to a unique deviation, g must also be one-to-one. We use

g(x, ω) = ∑_{j=1}^K a_j log[exp(b_j(x + c_j)) + 1],  with a_j, b_j > 0.   (20)
This is monotonic, positive, infinitely differentiable, asymptotic towards zero as x → −∞, and tends to (∑_{j=1}^K a_j b_j)x as x → ∞. In practice, it is useful to add a small constant to (20), to avoid rare situations where the parameters ω are trained to make g extremely small for certain inputs, at the expense of a good overall fit; this can happen when the parameters ω are learned by optimizing a likelihood. A suitable constant could be one tenth the absolute value of the smallest nonzero observation.
By inferring the parameters of the warping function, or distributions of these parameters, we are learning a transformation which will best model σ_t with a Gaussian process. The more flexible the warping function, the more potential there is to improve the GCPV fit; in other words, the better we can estimate the "perfect" transformation. To test the importance of this flexibility, we also try a simple unparametrized warping function, g(x) = eˣ. In related work, Goldberg et al. [20] place a GP prior on the log noise level in a standard GP regression model on observations, except for inference they use Gibbs sampling, and a high level of "jitter" for conditioning.

Once g is trained, we can infer the marginal distribution of each σ: F(a) = Φ(g⁻¹(a)), for a ∈ R. This suggests an alternate way to initialize g: we can initialize F as a mixture of Gaussians, and then map through Φ⁻¹ to find g⁻¹. Since mixtures of Gaussians are dense in the set of probability distributions, we could in principle find the "perfect" g using an infinite mixture of Gaussians [21].
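The warping (20) and its inverse (needed for g⁻¹(σ) and the implied marginal F) are simple to implement; the inverse has no closed form, so this sketch (function names are ours) inverts g numerically by bisection, which is valid since g is one-to-one:

```python
import numpy as np

def g(x, a, b, c):
    """Warping function (20): a sum of softplus terms, monotone R -> R+."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.sum(a[:, None] * np.log(np.exp(b[:, None] * (x[None, :] + c[:, None])) + 1.0),
                  axis=0)

def g_inverse(s, a, b, c, lo=-30.0, hi=30.0, iters=80):
    """Numerical inverse of g by bisection over a fixed bracket."""
    lo, hi = np.full_like(s, lo), np.full_like(s, hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        too_small = g(mid, a, b, c) < s
        lo = np.where(too_small, mid, lo)
        hi = np.where(too_small, hi, mid)
    return 0.5 * (lo + hi)

a, b, c = np.array([1.0]), np.array([2.0]), np.array([0.0])   # K = 1
s = g(np.array([0.3]), a, b, c)
print(g_inverse(s, a, b, c))   # recovers ~0.3
```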
5 Experiments
In our experiments, we predict the latent standard deviations σ of observations y at times t, and also σ* at unobserved times t*. To do this, we use two versions of GCPV. The first variant, which we simply refer to as GCPV, uses the warping function (20) with K = 1, and squared exponential covariance function, k(t, t′) = A exp(−(t − t′)²/l²), with A = 1. The second variant, which we call GP-EXP, uses the unparametrized warping function eˣ, and the same covariance function, except the amplitude A is a trained hyperparameter. The other hyperparameter l is called the lengthscale of the covariance function. The greater l, the greater the covariance between σ_t and σ_{t+a} for a ∈ R.
We train hyperparameters by maximizing the Laplace approximate log marginal likelihood (17). We then sample from p(f*|y) using the Laplace approximation (18). We also do this using MCMC (19) with J = 10000, after discarding a previous 10000 samples of p(f|y) as burn-in. We pass these samples of f*|y through g and g² to draw from p(σ*|y) and p(σ*²|y), and compute the sample mean and variance of σ*|y. We use the sample mean as a point predictor, and the sample variance for error bounds on these predictions, and we use 10000 samples to compute these quantities. For GCPV we use Laplace and MCMC for inference, but for GP-EXP we only use Laplace. We compare predictions to GARCH(1,1), which has been shown in extensive and recent reviews to be competitive with other GARCH variants, and more sophisticated models [5, 6, 7]. GARCH(p,q) specifies
y(t) ∼ N(0, σ²(t)), and lets the variance be a deterministic function of the past:

σ²_t = a_0 + ∑_{i=1}^p a_i y²_{t−i} + ∑_{j=1}^q b_j σ²_{t−j}.

We use the Matlab Econometrics Toolbox implementation of GARCH, where the parameters a_0, a_i and b_j are estimated using a constrained maximum likelihood.
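For reference, the GARCH(1,1) baseline's variance recursion is a one-liner per step. A Python sketch of filtering and one-step-ahead forecasting with fixed parameters (parameter estimation, done in the toolbox by constrained maximum likelihood, is omitted; the initialization choice is ours):

```python
import numpy as np

def garch11_filter(y, a0, a1, b1):
    """Run sigma_t^2 = a0 + a1*y_{t-1}^2 + b1*sigma_{t-1}^2 over a series
    and return the fitted variances plus the one-step-ahead forecast."""
    var = np.empty(len(y))
    var[0] = np.var(y)                    # a common initialization choice
    for t_i in range(1, len(y)):
        var[t_i] = a0 + a1 * y[t_i - 1]**2 + b1 * var[t_i - 1]
    forecast = a0 + a1 * y[-1]**2 + b1 * var[-1]
    return var, forecast

rng = np.random.default_rng(0)
y = rng.normal(0, 1, 500)
print(garch11_filter(y, a0=0.1, a1=0.1, b1=0.8)[1])
```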
We make forecasts of volatility, and we predict historical volatility. By "historical volatility" we
mean the volatility at observed time points, or between these points. Uncovering historical volatility
is important. It could, for instance, be used to study what causes fluctuations in the stock market, or
to understand physical systems.
To evaluate our model, we use the Mean Squared Error (MSE) between the true variance, or proxy
for the truth, and the predicted variance. Although likelihood has advantages, we are limited in
space, and we wish to harmonize with the econometrics literature, and other assessments of volatility
models, where MSE is the standard. In a similar assessment of volatility models, Brownlees et al.
[7] found that MSE and quasi-likelihood rankings were comparable.
When the true variance is unknown we follow Brownlees et al. [7] and use squared observations as a proxy for the truth, to compare our model to GARCH.² The more observations, the more reliable these performance estimates will be. However, not many observations (e.g. 100) are needed for a stable ranking of competing models; in Brownlees et al. [7], the rankings derived from high frequency squared observations are similar to those derived using daily squared observations.

² Since each observation y is assumed to have zero mean and variance σ², E[y²] = σ².
5.1 Simulations
We simulate observations from N(0, σ²(t)), using σ(t) = sin(t) cos(t²) + 1, at t = (0, 0.02, 0.04, …, 4)ᵀ. We call this data set TRIG. We also simulate using a standard deviation that jumps from 0.1 to 7 and back, at times t = (0, 0.1, 0.2, …, 6)ᵀ. We call this data set JUMP.
To forecast, we use all observations up until the current time point, and make 1, 7, and 30 step
ahead predictions. So, for example, in TRIG we start by observing t = 0, and make forecasts at
t = 0.02, 0.14, 0.60. Then we observe t = 0, 0.02 and make forecasts at t = 0.04, 0.16, 0.62, and
so on, until all data points have been observed. For historical volatility, we predict the latent σ_t at
the observation times, which is safe since we are comparing to the true volatility, which is not used
in training; the results are similar if we interpolate. Figure 1 panels a) and b) show the true volatility for TRIG and JUMP respectively, alongside GCPV Laplace, GCPV MCMC, GP-EXP Laplace,
and GARCH(1,1) predictions of historical volatility. Table 1 shows the results for forecasting and
historical volatility.
In panel a) we see that GCPV more accurately captures the dependencies between σ at different time points than GARCH: if we manually decrease the lengthscale in the GCPV covariance function, we can replicate the erratic GARCH behaviour, which inaccurately suggests that the covariance between σ_t and σ_{t+a} decreases quickly with increases in a. We also see that GCPV with an unparametrized exponential warping function tends to overestimate peaks and underestimate troughs. In panel b), the volatility is extremely difficult to reconstruct or forecast; with no warning it will immediately and dramatically increase or decrease. This behaviour is not suited to a smooth squared exponential covariance function. Nevertheless, GCPV outperforms GARCH, especially in regions of low volatility. We also see this in panel a) for t ∈ (1.5, 2). GARCH is known to respond slowly to large returns, and to overpredict volatility [22]. In JUMP, the greater the peaks, and the smaller the troughs, the more GARCH suffers, while GCPV is mostly robust to these changes.
5.2 Financial Data
The returns on the daily exchange rate between the Deutschmark (DM) and the Great Britain
Pound (GBP) from 1984 to 1992 have become a benchmark for assessing the performance of
GARCH models [8, 9, 10]. This exchange data, which we refer to as DMGBP, can be obtained
from www.datastream.com, and the returns are calculated as rt = log(Pt+1 /Pt ), where Pt is
the number of DM to GBP on day t. The returns are assumed to have a zero mean function.
We use a rolling window of the previous 120 days of returns to make 1, 7, and 30 day ahead volatility
forecasts, starting at the beginning of January 1988, and ending at the beginning of January 1992
(659 trading days). Every 7 days, we retrain the parameters of GCPV and GARCH. Every time
we retrain parameters, we predict historical volatility over the past 120 days. The average MSE
for these historical predictions is given in Table 1, although they should be observed with caution;
unlike with the simulations, the DMGBP historical predictions are trained using the same data they
are assessed on. In Figure 1c), we see that the GARCH one day ahead forecasts are lifted above
the GCPV forecasts, but unlike in the simulations, they are now operating on a similar lengthscale.
This suggests that GARCH could still be overpredicting volatility, but that GCPV has adapted its
estimation of how σ_t and σ_{t+a} correlate with one another. Since GARCH is suited to this financial
data set, it is reassuring that GCPV predictions have a similar time varying structure. Overall, GCPV
and GARCH are competitive with one another for forecasting currency exchange returns, as seen
in Table 1. Moreover, a learned warping function g outperforms an unparametrized one, and a full
Laplace solution is comparable to using MCMC for inference, in accuracy and speed. This is also
true for the simulations. Therefore we recommend whichever is more convenient to implement.
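As a concrete illustration of this protocol, the following is a minimal Python sketch of the rolling scheme just described: returns computed as r_t = log(P_{t+1}/P_t), a trailing 120-day window, retraining every 7 days, and 1, 7, and 30 day ahead forecasts. The fit and forecast callables stand in for GCPV or GARCH; these names, and the interface, are our own placeholders rather than anything from the paper.

import numpy as np

def log_returns(prices):
    # r_t = log(P_{t+1} / P_t), as defined above
    prices = np.asarray(prices, dtype=float)
    return np.log(prices[1:] / prices[:-1])

def rolling_volatility_eval(returns, fit, forecast, window=120,
                            retrain_every=7, horizons=(1, 7, 30)):
    # Walk forward: refit every `retrain_every` days on the trailing
    # `window` returns, then record the h-step-ahead volatility forecasts.
    preds = {h: [] for h in horizons}
    model = None
    for t in range(window, len(returns) - max(horizons)):
        if model is None or (t - window) % retrain_every == 0:
            model = fit(returns[t - window:t])
        for h in horizons:
            preds[h].append((t + h, forecast(model, returns[t - window:t], h)))
    return preds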
6 Discussion
We defined a copula process, and as an example, developed a stochastic volatility model, GCPV,
which can outperform GARCH. With GCPV, the volatility ?t is distributed as a Gaussian Copula
Process, which separates the modelling of the dependencies between volatilities at different times
from their marginal distributions, arguably the most useful property of a copula. Further, GCPV fits the marginals in the Gaussian copula process by learning a warping function. If we had simply chosen an unparametrized exponential warping function, we would incorrectly be assuming that the log volatilities are marginally Gaussian distributed.
Table 1: MSE for predicting volatility.

Data set         Model          Historical   1 step   7 step   30 step
TRIG             GCPV (LA)      0.0953       0.588    0.951    1.71
                 GCPV (MCMC)    0.0760       0.622    0.979    1.76
                 GP-EXP         0.193        0.646    1.36     1.15
                 GARCH          0.938        1.04     1.79     5.12
JUMP             GCPV (LA)      0.588        0.891    1.38     1.35
                 GCPV (MCMC)    1.21         0.951    1.37     1.35
                 GP-EXP         1.43         1.76     6.95     14.7
                 GARCH          1.88         1.58     3.43     5.65
DMGBP (×10⁻⁹)    GCPV (LA)      2.43         3.00     3.08     3.17
                 GCPV (MCMC)    2.39         3.00     3.08     3.17
                 GP-EXP         2.52         3.20     3.46     5.14
                 GARCH          2.83         3.03     3.12     3.32
[Figure 1 appears here. Panels: (a) TRIG and (b) JUMP plot Volatility against Time; (c) DMGBP plots Volatility against Days; (d) DMGBP plots Probability Density against σ.]
Figure 1: Predicting volatility and learning its marginal pdf. For a) and b), the true volatility, and GCPV
(MCMC), GCPV (LA), GP-EXP, and GARCH predictions, are shown respectively by a thick green line, a
dashed thick blue line, a dashed black line, a cyan line, and a red line. a) shows predictions of historical
volatility for TRIG, where the shade is a 95% confidence interval about GCPV (MCMC) predictions. b) shows
predictions of historical volatility for JUMP. In c), a black line and a dashed red line respectively show GCPV
(LA) and GARCH one day ahead volatility forecasts for DMGBP. In d), a black line and a dashed blue line
respectively show the GCPV learned marginal pdf of σ_t in DMGBP and a Gamma(4.15, 0.00045) pdf.
Indeed, for the DMGBP data, we trained the warping function g over a 120 day period, and mapped its inverse through the univariate standard Gaussian cdf Φ, and differentiated, to estimate the marginal probability density function (pdf) of σ_t over this period. The learned marginal pdf, shown in Figure 1d), is similar to a Gamma(4.15, 0.00045) distribution. However, in using a rolling window to retrain the parameters of g, we do not assume that the marginals of σ_t are stationary; we have a time changing warping function.
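The density estimate just described can be reproduced numerically: form F(σ) = Φ(g⁻¹(σ)) on a grid and differentiate. The sketch below is our own illustration, assuming the learned warping g is available as an increasing callable; the stand-in g and the grid limits are arbitrary, and the Gamma(4.15, 0.00045) pdf is included only as the reference curve from Figure 1d).

import numpy as np
from scipy.stats import norm, gamma

def marginal_pdf_from_warping(g, sigma_grid, f_lo=-8.0, f_hi=8.0, n=20001):
    # F(sigma) = Phi(g^{-1}(sigma)); invert g by interpolation, then differentiate
    f = np.linspace(f_lo, f_hi, n)
    g_inv = np.interp(sigma_grid, g(f), f)          # requires g to be increasing
    return np.gradient(norm.cdf(g_inv), sigma_grid)

sigma = np.linspace(1e-4, 0.012, 400)
g = lambda f: np.exp(0.5 * f - 5.5)                 # hypothetical stand-in warping
pdf_est = marginal_pdf_from_warping(g, sigma)
pdf_ref = gamma.pdf(sigma, a=4.15, scale=0.00045)   # reference curve of Figure 1d)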
While GARCH is successful, and its simplicity is attractive, our model is also simple and has a
number of advantages. We can effortlessly handle missing data, we can easily incorporate covariates
other than time (like interest rates) in our covariance function, and we can choose from a rich class
of covariance functions: squared exponential, Brownian motion, Matérn, periodic, etc. In fact, the
volatility of high frequency intradaily returns on equity indices and currency exchanges is cyclical
[23], and GCPV with a periodic covariance function is uniquely well suited to this data. And the
parameters of GCPV, like the covariance function lengthscale, or the learned warping function,
provide insight into the underlying source of volatility, unlike the parameters of GARCH.
Finally, copulas are rapidly becoming popular in applications, but often only bivariate copulas are
being used. With our copula process one can learn the dependencies between arbitrarily many random variables independently of their marginal distributions. We hope the Gaussian Copula Process
Volatility model will encourage other applications of copula processes. More generally, we hope
our work will help bring together the machine learning and econometrics communities.
Acknowledgments: Thanks to Carl Edward Rasmussen and Ferenc Huszár for helpful conversations. AGW is supported by an NSERC grant.
References
[1] Paul Embrechts, Alexander McNeil, and Daniel Straumann. Correlation and dependence in risk management: Properties and pitfalls. In Risk Management: Value at Risk and Beyond, pages 176–223. Cambridge University Press, 1999.
[2] David X. Li. On default correlation: A copula function approach. Journal of Fixed Income, 9(4):43–54, 2000.
[3] Roger B. Nelsen. An Introduction to Copulas. Springer Series in Statistics, second edition, 2006.
[4] Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3):307–327, 1986.
[5] Ser-Huang Poon and Clive W.J. Granger. Practical issues in forecasting volatility. Financial Analysts Journal, 61(1):45–56, 2005.
[6] Peter Reinhard Hansen and Asger Lunde. A forecast comparison of volatility models: Does anything beat a GARCH(1,1)? Journal of Applied Econometrics, 20(7):873–889, 2005.
[7] Christian T. Brownlees, Robert F. Engle, and Bryan T. Kelly. A practical guide to volatility forecasting through calm and storm, 2009. Available at SSRN: http://ssrn.com/abstract=1502915.
[8] T. Bollerslev and E. Ghysels. Periodic autoregressive conditional heteroscedasticity. Journal of Business and Economic Statistics, 14:139–151, 1996.
[9] B. D. McCullough and C. G. Renfro. Benchmarks and software standards: A case study of GARCH procedures. Journal of Economic and Social Measurement, 25:59–71, 1998.
[10] C. Brooks, S. P. Burke, and G. Persand. Benchmarks and the accuracy of GARCH model estimation. International Journal of Forecasting, 17:45–56, 2001.
[11] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[12] Abe Sklar. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8:229–231, 1959.
[13] P. Deheuvels. Caractérisation complète des lois extrêmes multivariées et de la convergence des types extrêmes. Publications de l'Institut de Statistique de l'Université de Paris, 23:1–36, 1978.
[14] G. Kimeldorf and A. Sampson. Uniform representations of bivariate distributions. Communications in Statistics, 4:617–627, 1982.
[15] Edward Snelson, Carl Edward Rasmussen, and Zoubin Ghahramani. Warped Gaussian processes. In NIPS, 2003.
[16] Sebastian Jaimungal and Eddie K. H. Ng. Kernel-based Copula processes. In ECML PKDD, 2009.
[17] A. Rodríguez, D. B. Dunson, and A. E. Gelfand. Latent stick-breaking processes. Journal of the American Statistical Association, 105(490):647–659, 2010.
[18] Jarno Vanhatalo, Pasi Jylänki, and Aki Vehtari. Gaussian process regression with Student-t likelihood. In NIPS, 2009.
[19] Iain Murray, Ryan Prescott Adams, and David J. C. MacKay. Elliptical slice sampling. In AISTATS, 2010.
[20] Paul W. Goldberg, Christopher K. I. Williams, and Christopher M. Bishop. Regression with input-dependent noise: A Gaussian process treatment. In NIPS, 1998.
[21] Carl Edward Rasmussen. The infinite Gaussian mixture model. In NIPS, 2000.
[22] Ruey S. Tsay. Analysis of Financial Time Series. John Wiley & Sons, 2002.
[23] Torben G. Andersen and Tim Bollerslev. Intraday periodicity and volatility persistence in financial markets. Journal of Empirical Finance, 4(2-3):115–158, 1997.
3,405 | 4,083 | Batch Bayesian Optimization
via Simulation Matching
Javad Azimi, Alan Fern, Xiaoli Z. Fern
School of EECS, Oregon State University
{azimi, afern, xfern}@eecs.oregonstate.edu
Abstract
Bayesian optimization methods are often used to optimize unknown functions that
are costly to evaluate. Typically, these methods sequentially select inputs to be
evaluated one at a time based on a posterior over the unknown function that is
updated after each evaluation. In many applications, however, it is desirable to
perform multiple evaluations in parallel, which requires selecting batches of multiple inputs to evaluate at once. In this paper, we propose a novel approach to
batch Bayesian optimization, providing a policy for selecting batches of inputs
with the goal of optimizing the function as efficiently as possible. The key idea is
to exploit the availability of high-quality and efficient sequential policies, by using
Monte-Carlo simulation to select input batches that closely match their expected
behavior. Our experimental results on six benchmarks show that the proposed approach significantly outperforms two baselines and can lead to large advantages
over a top sequential approach in terms of performance per unit time.
1 Introduction
We consider the problem of maximizing an unknown function f (x) when each evaluation of the
function has a high cost. In such cases, standard optimization techniques such as empirical gradient
methods are not practical due to the high number of function evaluations that they demand. Rather,
Bayesian optimization (BO) methods [12, 4] have demonstrated significant promise in their ability
to effectively optimize a function given only a small number of evaluations. BO gains this efficiency
by leveraging Bayesian models that take into account all previously observed evaluations in order
to better inform future evaluation choices. In particular, typical BO methods continually maintain a
posterior over f (x) that is used to select the next input to evaluate. The result of the evaluation is
then used to update the posterior and the process repeats. There are a number of well established
policies for selecting the next input to evaluate given the current posterior. We will refer to such
policies as sequential policies to stress the fact that they select one input at a time.
In many applications it is possible and desirable to run multiple function evaluations in parallel.
This is the case, for example, when the underlying function corresponds to a controlled laboratory
experiment where multiple experimental setups are examined simultaneously, or when the underlying function is the result of a costly computer simulation and multiple simulations can be run across
different processors in parallel. In such cases, existing sequential policies are not sufficient. Rather,
batch mode BO is more appropriate, where policies select a batch of multiple inputs to be evaluated
at once. To the best of our knowledge and as noted in [4], there is no established work on BO that
considers the batch selection problem, except for a brief treatment in [21]. The main contribution of
this work is to propose an approach to batch BO and to demonstrate its effectiveness.
The key motivation behind our approach comes from the fact that the sequential mode of BO has a
fundamental advantage over BO in batch mode. This is because in sequential mode, each function
evaluation is immediately used to obtain a more accurate posterior of f (x), which in turn will allow
a selection policy to make more informed choices about the next input. Given an effective sequential
selection policy, our goal is then to design a batch policy that approximates its behavior.
In particular, our batch policy attempts to select a batch that ?matches? the expected behavior of a
sequential policy as closely as possible. The approach generates Monte-Carlo simulations of a sequential policy given the current posterior, and then derives an optimization problem over possible
batches aimed at minimizing the loss between the sequential policy and the batch. We consider two
variants of this optimization problem that yield a continuous weighted k-means problem and a combinatorial weighted k-medoid problem. We solve the k-means variant via k-means clustering and
show that the k-medoid variant corresponds to minimizing a non-increasing supermodular function,
for which there is an efficient approximation algorithm [9].
We evaluate our approach on a collection of six functions and compare it to random and another
baseline batch policy based on submodular maximization. The results show that our approach significantly outperforms these baselines and can lead to large advantages over a top sequential approach
in terms of performance per unit time.
2 Problem Setup
Let X ⊆ R^n be an n-dimensional input space, where we will often refer to elements of X as an experiment and assume that each dimension i is bounded in [A_i, B_i]. We assume an unknown real-valued function f : X → R, which represents the expected value of the dependent variable after running an experiment. For example, f(x) might correspond to the result of a wet-lab experiment or a computer simulation with input parameters x. Conducting an experiment x produces a noisy outcome y = f(x) + ε, where ε is a noise term that might be 0 in some applications.
Our objective is to find an experiment x ∈ X that approximately maximizes f by requesting a
limited number of experiments and observing their outcomes. Furthermore we are interested in
applications where (1) running experiments is costly (e.g. in terms of laboratory or simulation time);
and (2) it is desirable to run k > 1 experiments in parallel. This motivates the problem of selecting
a sequence of batches, each containing k experiments, where the choice of a batch can depend on
the results observed from all previous experiments. We will refer to the rule for selecting a batch
based on previous experiments as the batch policy. The main goal of this paper is to develop a batch
policy that optimizes the unknown function as efficiently as possible.
Due to the high cost of experiments, traditional optimization techniques such as empirical gradient
ascent are not practical for our setting, due to their high demands on the number of experiments.
Rather, we build on Bayesian optimization (BO) [10, 12, 4], which leverages Bayesian modeling
in an attempt to achieve more efficient optimization. In particular, BO maintains a posterior over
the unknown function based on previously observed experiments, e.g. represented via a Gaussian
Process (GP) [19]. This posterior is used to select the next experiment to be run in a way that attempts
to trade-off exploring new parts of the experimental space and exploiting parts that look promising.
While the BO literature has provided a number of effective policies, they are all sequential policies,
where only a single experiment is selected and run at a time. Thus, the main novelty of our work is
in defining a batch policy in the context of BO, which is described in the next section.
3 Simulation Matching for Batch Selection
Given a data set D of previously observed experiments, which induces a posterior distribution over
the unknown function, we now consider how to select the next batch of k experiments. A key issue in
making this choice is to manage the trade-off between exploration and exploitation. The policy must
attempt to explore by requesting experiments from unexplored parts of the input space, at the same
time also attempt to optimize the unknown function via experiments that look promising given the
current data. While, under most measures, optimizing this trade-off is computationally intractable,
there are a number of heuristic sequential policies from the BO literature that are computationally
efficient and perform very well in practice. For example, one such policy selects the next experiment
to be the one that has the "maximum expected improvement" according to the current posterior [14, 10]. The main idea behind our approach is to leverage such sequential policies by selecting a batch of k > 1 experiments that "closely matches" the sequential policy's expected behavior.
More formally, let π be a sequential policy. Given a data set D of prior experimental results, π returns the next experiment x ∈ X to be selected. As is standard in BO, we assume we have a posterior
density P(f | D) over the unknown function f, such as a Gaussian Process. Given this density we can define a density over the outcomes of executing policy π for k steps, each outcome consisting of a set of k selected experiments. Let S_π^k be the random variable denoting the set of k experiments resulting from such k-step executions, which has a well defined density over all possible sets given the posterior of f. Importantly, it is generally straightforward to use Monte Carlo simulation to sample values of S_π^k.¹ Our batch policy is based on generating a number of samples of S_π^k, which are used to define an objective for optimizing a batch of k experiments. Below we describe this
objective and a variant, followed by a description of how we optimize the proposed objectives.
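Sampling S_π^k amounts to rolling the sequential policy forward against outcomes hallucinated from the current posterior (the procedure is spelled out in footnote 1 below). A minimal sketch, assuming a posterior object that exposes a sample_y method and a non-destructive update; these method names are our own assumptions, not an interface from the paper.

import copy

def simulate_rollout(posterior, policy, k):
    # One sample of S_pi^k: run the policy k steps on fantasized outcomes.
    post = copy.deepcopy(posterior)
    S = []
    for _ in range(k):
        x = policy(post)           # next experiment chosen by the sequential policy
        y = post.sample_y(x)       # outcome drawn from the current posterior
        post = post.update(x, y)   # condition on the simulated observation
        S.append(x)
    return S

def sample_simulations(posterior, policy, k, n):
    return [simulate_rollout(posterior, policy, k) for _ in range(n)]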
3.1 Batch Objective Function
Our goal is to select a batch B of k experiments that best "matches the expected behavior" of a base sequential policy π conditioned on the observed data D. More precisely, we consider a batch B to
be a good match for a policy execution if B contains an experiment that is close to the best of the k
experiments selected by the policy. To specify this objective we first introduce some notation. Given
a function f and a set of experiments S, we define x*(f, S) = arg max_{x∈S} f(x) to be the maximizer of f in S. Also, for any experiment x and set B we define nn(x, B) = arg min_{x′∈B} ‖x − x′‖ to be the nearest neighbor of x in set B. Our objective can now be written as selecting a batch B that minimizes

    OBJ(B) = E_{S_π^k} [ E_{f | S_π^k} [ ‖x*(f, S_π^k) − nn(x*(f, S_π^k), B)‖² | D ] | D ].
Note that this nested expectation is the result of decomposing the joint posterior over S_π^k and f as P(f, S_π^k | D) = P(f | S_π^k, D) · P(S_π^k | D). If we assume that the unknown function f(x) is
Lipschitz continuous then minimizing this objective can be viewed as minimizing an upper bound
on the expected performance difference between the sequential policy and the selected batch. Here
the performance of a policy or a batch is equal to the output value of the best selected experiment.
We will approximate this objective by replacing the outer expectation over S_π^k with a sample average over n samples {S₁, …, S_n} of S_π^k as follows, recalling that each S_i is a set of k experiments:

    OBJ(B) ≈ (1/n) Σ_i E_{f|S_i} [ ‖x*(f, S_i) − nn(x*(f, S_i), B)‖² | D ]
           = (1/n) Σ_i Σ_{x∈S_i} Pr(x = x*(f, S_i) | D, S_i) · ‖x − nn(x, B)‖²
           = (1/n) Σ_i Σ_{x∈S_i} α_{i,x} · ‖x − nn(x, B)‖²                        (1)
The second step follows by noting that x*(f, S_i) must be one of the k experiments in S_i.
We now define our objective as minimizing (1) over batch B. The objective corresponds to a weighted k-means clustering problem, where we must select B to minimize the weighted distortion between the simulated points and their closest points in B. The weight on each simulated experiment α_{i,x} corresponds to the probability that the experiment x ∈ S_i achieves the maximum value of the unknown f among the experiments in S_i, conditioned on D and the fact that S_π^k = S_i. We refer to this objective as the k-means objective.
We also consider a variant of this objective where the goal is to find a B that minimizes (1) under the constraint that B is restricted to experiments in the simulations, i.e. B ⊆ ∪_i S_i s.t. |B| = k. This objective corresponds to the weighted k-medoid clustering problem, which is often considered to improve robustness to outliers in clustering. Accordingly we will refer to this objective as the k-medoid objective and note that given a fixed set of simulations this corresponds to a discrete optimization problem.
3.2 Optimization Approach
The above k-means and k-medoid objectives involve the weights α_{i,x} = P(x = x*(f, S_i) | D, S_π^k = S_i), for each x ∈ S_i. In general these weights will be difficult to compute exactly, particularly due to the conditioning on the set S_i.
¹ For example, this can be done by starting with D and selecting the first experiment x₁ using π and then using P(f | D) to simulate the result y₁ of experiment x₁. This simulated experiment is added to D and the process repeats for k − 1 additional experiments.
Algorithm 1 Greedy Weighted k-Medoid Algorithm
Input: S = {(x₁, w₁), …, (x_m, w_m)}, k
Output: B
B ← {x₁, …, x_m}  // initialize batch to all data points
while |B| > k do
  x ← arg min_{x∈B} Σ_{j=1}^{m} w_j · ‖x_j − nn(x_j, B \ x)‖  // point that influences the objective the least
  B ← B \ x
end while
return B
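For concreteness, a direct NumPy rendering of Algorithm 1; pairwise distances are precomputed, and at each round the point whose removal increases the weighted objective the least is dropped. This is our sketch, not the authors' implementation.

import numpy as np

def greedy_weighted_kmedoid(X, w, k):
    # X: (m, d) simulated experiments; w: (m,) weights; returns the k kept points.
    X, w = np.asarray(X, float), np.asarray(w, float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    B = list(range(len(X)))
    while len(B) > k:
        def obj_without(x):
            rest = [b for b in B if b != x]
            return np.sum(w * D[:, rest].min(axis=1))  # sum_j w_j ||x_j - nn(x_j, B \ x)||
        B.remove(min(B, key=obj_without))
    return X[B]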
In this work, we approximate those weights by dropping the conditioning on S_i, for which it is then possible to derive a closed form when the posterior over f is represented as a Gaussian Process (GP). We have found that this approach leads to good empirical performance. In particular, instead of using the weights α_{i,x} we use the weights α̃_{i,x} = P(x = x*(f, S_i) | D). When the posterior over f is represented as a GP, as in our experiments, the joint distribution over experimental outcomes in S_i = {x_{i,1}, …, x_{i,k}} is normally distributed. That is, the random vector ⟨f(x_{i,1}), …, f(x_{i,k})⟩ ∼ N(μ, Σ), where the mean μ and covariance Σ have standard closed forms given by the GP conditioned on D. From this, it is clear that for a GP the computation of α̃_{i,x} is equivalent to computing the probability that the ith component of a normally distributed vector is larger than the other components. A closed form solution for this probability is given by the following proposition.
Proposition 1. If (y₁, y₂, …, y_k) ∼ N(μ_y, Σ_y) then for any i ∈ {1, …, k},

    P(y_i ≥ y₁, y_i ≥ y₂, …, y_i ≥ y_k) = Π_{j=1}^{k−1} (1 − Φ(−μ_j))        (2)

where Φ(·) is the standard normal cdf, μ = (μ₁, μ₂, …, μ_{k−1}) = (A Σ_y A^⊤)^{−1/2} A μ_y, and A ∈ R^{(k−1)×k} is a sparse matrix such that for any j = 1, 2, …, k − 1 we have A_{j,i} = 1, for any p with 1 ≤ p ≤ i − 1 we have A_{p,p} = −1, and for any p with i + 1 ≤ p ≤ k we have A_{p−1,p} = −1.
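Equation (2) takes only a few lines to evaluate; the inverse matrix square root is computed by eigendecomposition. A sketch of ours, assuming Σ_y is positive definite (a small clip guards the eigenvalues):

import numpy as np
from scipy.stats import norm

def prob_ith_is_max(mu_y, Sigma_y, i):
    # Evaluates Eq. (2): build A, whiten A mu_y, take the product of 1 - Phi(-mu_j).
    k = len(mu_y)
    A = np.zeros((k - 1, k))
    A[:, i] = 1.0
    for j, p in enumerate([p for p in range(k) if p != i]):
        A[j, p] = -1.0
    M = A @ Sigma_y @ A.T
    vals, vecs = np.linalg.eigh(M)
    vals = np.clip(vals, 1e-12, None)
    mu = (vecs @ np.diag(vals ** -0.5) @ vecs.T) @ (A @ mu_y)
    return float(np.prod(1.0 - norm.cdf(-mu)))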
Using this approach to compute the weights we can now consider optimizing the k-means and k-medoid objectives from (1), both of which are known to be NP-hard problems. For the k-means objective we solve for the set B by simply applying the k-means clustering algorithm [13] to the weighted data set ∪_i ∪_{x∈S_i} {(x, α̃_{i,x})}. The k cluster centers are returned as our batch B.
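A Lloyd-style weighted k-means suffices here; the sketch below implements the weighted centroid update directly rather than relying on a particular library, and uses random initialization for brevity.

import numpy as np

def weighted_kmeans(X, w, k, iters=100, seed=0):
    # Minimizes sum_i w_i * ||x_i - nn(x_i, B)||^2 over k centers B.
    X, w = np.asarray(X, float), np.asarray(w, float)
    rng = np.random.default_rng(seed)
    B = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = ((X[:, None, :] - B[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for c in range(k):
            mask = assign == c
            if w[mask].sum() > 0:          # leave empty clusters where they are
                B[c] = np.average(X[mask], axis=0, weights=w[mask])
    return B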
The k-medoid objective is well known [22] and the weighted k-medoid clustering algorithm [11]
has been shown to perform well and be robust to outliers in the data. While we have experimented
with this algorithm and obtained good results, we have achieved results that are as good or better
using an alternative greedy algorithm that provides certain approximation guarantees. Pseudo-code for this algorithm is shown in Algorithm 1 above. The input to the algorithm is the set of weighted experiments
and the batch size k. The algorithm initializes the batch B to include all of the input experiments,
which achieves the minimum objective value of zero. The algorithm then iteratively removes one
experiment from B at a time until |B| = k, each time removing the element whose removal results
in the smallest increase in the k-medoid objective.
This greedy algorithm is motivated by theoretical results on the minimization of non-increasing,
supermodular set functions.
Definition 1. Suppose S is a finite set. f : 2^S → R⁺ is a supermodular set function if for all B₁ ⊆ B₂ ⊆ S and x ∈ S \ B₂, it holds that f(B₁) − f(B₁ ∪ {x}) ≥ f(B₂) − f(B₂ ∪ {x}).
Thus, a set function is supermodular if adding an element to a smaller set provides no less improvement than adding the element to a larger set. Also, a set function is non-increasing if for any set S and element x, f(S) ≥ f(S ∪ {x}). It can be shown that our k-medoid objective function of (1) is both a non-increasing and supermodular function of B, and achieves a minimum value of zero for B = ∪_i S_i. It follows that we can obtain an approximation guarantee for the described greedy algorithm in [9].
Theorem 1. [9] Let f be a monotonic non-increasing supermodular function over subsets of the finite set S, |S| = m, with f(S) = 0. Let B be the set of the elements returned by the greedy Algorithm 1 s.t. |B| = k, let q = m − k, and let B* = argmin_{B′⊆S, |B′|=k} f(B′). Then

    f(B) ≤ (1/t) · [((q + t)/q)^q − 1] · f(B*) ≤ ((e^t − 1)/t) · f(B*)        (3)

where t is the steepness parameter [9] of the function f.
Notice that the approximation bound involves the steepness parameter t of f , which characterizes
the rate of decrease of f . This is unavoidable since it is known that achieving a constant factor
approximation guarantee is not possible unless P=NP [17]. Further this bound has been shown to be
tight for any t [9]. Note that this is in contrast to guarantees for greedy maximization of submodular
functions [7] for which there are constant factor guarantees. Also note that the greedy algorithm
we use is qualitatively different from the one used for submodular maximization, since it greedily
removes elements from B rather than greedily adding elements to B.
4 Implementation Details and Baselines
GP Posterior. Our batch selection approach described above requires that we maintain a posterior over the unknown function f. For this purpose we use a zero-mean GP prior with a zero-mean Gaussian noise model with variance equal to 0.01. The GP covariance is specified by a Gaussian kernel K(x, x′) = σ exp(−(1/2w) ‖x − x′‖²), with signal variance σ = y_max², where y_max is the maximum value of the unknown function. In all of our experiments we used a simple rule of thumb to set the kernel width w to 0.01 Σ_{i=1}^{d} l_i, where l_i is the input space length in dimension i. We have found this rule to work well for a variety of problems. An alternative would be to use a validation-based approach for selecting the kernel parameters. In the BO setting, however, we have found this to be unreliable since the number of data points is relatively small.
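The posterior mean and variance used throughout have the standard GP closed forms; a compact sketch with exactly the kernel and width rule just described (noise variance 0.01) follows. The Cholesky-based solve is a routine numerical choice of ours, not something specified in the paper.

import numpy as np

def gp_posterior(X, y, Xq, y_max, lengths, noise=0.01):
    # K(x, x') = sigma * exp(-||x - x'||^2 / (2w)), sigma = y_max^2,
    # w = 0.01 * sum_i l_i (the rule of thumb above).
    sigma_f = y_max ** 2
    w = 0.01 * float(np.sum(lengths))
    sq = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    K = sigma_f * np.exp(-sq(X, X) / (2 * w)) + noise * np.eye(len(X))
    Ks = sigma_f * np.exp(-sq(Xq, X) / (2 * w))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    V = np.linalg.solve(L, Ks.T)
    return Ks @ alpha, sigma_f - (V ** 2).sum(axis=0)   # posterior mean, variance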
Base Sequential Policy. Our batch selection approach also requires a base sequential policy π to be used for simulation matching. This policy must be able to select the next experiment given any set of prior experimental observations D. In our experiments, we use a policy based on the Maximum Expected Improvement (MEI) heuristic [14, 10], which is a very successful sequential policy for BO and has been shown to converge in the limit to the global optimum. Given data D the MEI policy simply selects the next experiment to be the one that maximizes the expected improvement over the current set of experiments with respect to maximizing the unknown function. More formally, let y* be the value of the best/largest experimental outcome observed so far in D. The MEI value of an experiment x is given by MEI(x) = E_f[max{f(x) − y*, 0} | D]. For our GP posterior over f we can derive a closed form for this, where y* is our best currently observed value. For any given example x, the MEI can be computed as follows:

    MEI(x) = σ(x) · [−u Φ(−u) + φ(u)],    u = (y* − μ(x)) / σ(x)

where Φ and φ are the standard normal cumulative distribution and density functions and μ(x) and σ(x) are the mean and standard deviation of f(x) according to the GP given D, which have simple closed forms. Note that we have also evaluated our simulation-matching approach with an alternative sequential policy known as Maximum Probability of Improvement [16, 10]. The results (not shown in this paper) are similar to those obtained from MEI, showing that our general approach works well for different base policies.
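In practice the closed form above is a one-liner; a small sketch (the variance floor is our own numerical safeguard):

import numpy as np
from scipy.stats import norm

def mei(mu, var, y_best):
    # MEI(x) = sigma(x) * (-u * Phi(-u) + phi(u)), u = (y* - mu(x)) / sigma(x)
    sigma = np.sqrt(np.maximum(var, 1e-12))
    u = (y_best - mu) / sigma
    return sigma * (-u * norm.cdf(-u) + norm.pdf(u))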
The computation of the MEI policy requires maximizing MEI(x) over the input space X . In general, this function does not have a unique local maximum and various strategies have been tried for
maximizing it. In our experiments, we (approximately) maximize the MEI function using the DIRECT black-box optimization procedure, which has shown good optimization performance as well
as computational efficiency in practice.
Baseline Batch Policies. To the best of our knowledge there is no well-known batch policy for
Bayesian optimization. However, in our experiments we will compare against two baselines. The
first baseline is random selection, where a batch of k random experiments is returned at each step. Interestingly, in the case of batch active learning for classification, the random batch selection strategy has been surprisingly effective and is often difficult to outperform with more sophisticated strategies [8].
Table 1: Benchmark Functions.

Function       Mathematical representation
Cosines        1 − (u² + v² − 0.3 cos(3πu) − 0.3 cos(3πv)),  where u = 1.6x − 0.5, v = 1.6y − 0.5
Rosenbrock     10 − 100(y − x²)² − (1 − x)²
Michalewicz    −Σ_{i=1}^{5} sin(x_i) · sin(i · x_i² / π)^20
However, as our experiments will show, our approach will dominate random.
Our second, more sophisticated, baseline is based on selecting a batch of experiments whose expected maximum output is the largest. More formally, we consider selecting a size k batch B that maximizes the objective E_f[max_{x∈B} f(x) | D], which we will refer to as the EMAX objective. For our GP prior, each set B = {x₁, …, x_k} can be viewed as defining a normally distributed vector ⟨f(x₁), …, f(x_k)⟩ ∼ N(μ, Σ). Even in this case, finding the optimal set B is known to be NP-hard. However, for the case where f is assumed to be non-negative, the EMAX objective is a non-negative, submodular, non-decreasing function of B. Together these properties imply that a simple greedy algorithm can achieve an approximation ratio of 1 − e⁻¹ [7]. The algorithm starts
with an empty B and greedily adds experiments to B, each time selecting the one that improves the
EMAX objective the most. Unfortunately, in general there is no closed form solution for evaluating
the EMAX objective, even in our case of normally distributed vectors [20]. Therefore, to implement the greedy algorithm, which requires many evaluations of the EMAX objective, we use Monte
Carlo sampling, where for a given set B we sample the corresponding normally distributed vector
and average the maximum values across the samples.
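Putting the pieces together, a sketch of the greedy EMAX baseline with its Monte Carlo estimate, assuming the GP posterior mean and covariance over a finite candidate pool are given; the sample count and seed are arbitrary choices of ours.

import numpy as np

def greedy_emax(mu, Sigma, k, n_samples=2000, seed=0):
    # Greedily grows B to maximize the MC estimate of E[max over B of f(x)].
    rng = np.random.default_rng(seed)
    F = rng.multivariate_normal(mu, Sigma, size=n_samples)  # joint posterior samples
    B = []
    for _ in range(k):
        def emax_with(j):
            return F[:, B + [j]].max(axis=1).mean()         # EMAX(B + {j}) estimate
        B.append(max((j for j in range(len(mu)) if j not in B), key=emax_with))
    return B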
5 Experimental Results
In this section we evaluate our proposed batch BO approach and the baseline approaches on six
different benchmarks.
5.1 Benchmark Functions
We consider three well-known synthetic benchmark functions: Cosines and Rosenbrock [1, 5],
which are over [0, 1]², and Michalewicz [15], which is over [0, π]⁵. Table 1 gives the formulas
for each of these functions. Two additional benchmark functions Hydrogen and FuelCell, which
range over [0, 1]², are derived from real-world experimental data sets. In both cases, the benchmark function was created by fitting regression models to data sets resulting from real experiments.
The Hydrogen data set is the result of data collected as part of a study on biosolar hydrogen production [6], where the goal was to maximize the hydrogen production of a particular bacteria by
optimizing the pH and Nitrogen levels of the growth medium. The FuelCell data set was collected as part of a study investigating the influence of anodes' nano-structure on the power output of microbial fuel cells [3]. The experimental inputs include the average area and average circularity of the nano-particles [18].
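The three synthetic benchmarks of Table 1 can be written out directly (Hydrogen, FuelCell, and Cart-Pole depend on data and a simulator, so they are omitted here); a sketch:

import numpy as np

def cosines(x, y):
    u, v = 1.6 * x - 0.5, 1.6 * y - 0.5
    return 1 - (u**2 + v**2 - 0.3 * np.cos(3 * np.pi * u) - 0.3 * np.cos(3 * np.pi * v))

def rosenbrock(x, y):
    return 10 - 100 * (y - x**2)**2 - (1 - x)**2

def michalewicz(x):
    # x is a point in [0, pi]^5
    i = np.arange(1, 6)
    return -np.sum(np.sin(x) * np.sin(i * x**2 / np.pi) ** 20)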
The last benchmark function is derived from the Cart-Pole [2] problem, which is a commonly used
reinforcement learning problem. The goal is to optimize the parameters of a controller for a wheeled
cart with the objective of balancing a pole. The controller is parameterized by four parameters
giving a 4-d space of experiments in [−1, 1]⁴. Given a setting for these parameters, the benchmark
function is implemented by using the standard Cart-Pole simulator to return the reward received for
the controller.
5.2 Results
Figures 2 and 3 show the performance of our methods on all six benchmark functions for batch sizes
5 and 10 respectively. Each graph contains 5 curves, each corresponding to a different BO approach
(see below). Each curve is the result of taking an average of 100 independent runs. The x-axis of
each graph represents the total number of experiments and the y-axis represents the regret values,
where the regret of a policy at a particular point is the difference between the best possible output
value (or an upper bound if the value is not known) and the best value found by the policy. Hence the
regret is always positive and smaller values are preferred. Each run of a policy initializes the data set
to contain 5 randomly selected experiments for the 2-d functions and 20 random initial experiments
for the higher dimensional functions.
[Figure 1 appears here: contour plots of the Fuel Cell, Hydrogen, Cosines, and Rosenbrock functions over [0, 1]².]
Figure 1: The contour plots for the four 2-dimensional proposed test functions.
[Figure 2 appears here: six panels (Fuel Cell, Hydrogen, Cosines, Rosenbrock, Cart-Pole, Michalewicz), each plotting Regret against the number of experiments for the Sequential, k-medoid, k-means, EMAX, and Random policies.]
Figure 2: Performance evaluation with batch size 5.
[Figure 3 appears here: the same six panels, plotting Regret against the number of experiments with batch size 10.]
Figure 3: Performance evaluation with batch size 10.
Each graph gives curves for four batch approaches including our baselines Random and EMAX,
along with our proposed approaches based on the k-means and k-medoid objectives, which are
optimized by weighted k-means clustering and the greedy Algorithm 1 respectively. In addition, for
reference we plot the performance of the base Sequential MEI BO policy (k = 1) on each graph.
Note that since the batch approaches request either 5 or 10 experiments at a time, their curves only
contain data points at those intervals. For example, for the batch size 5 results the first point on a
batch curve corresponds to 10 experiments, including the initial 5 experiments and the first requested
batch. The next point on the batch curve is for 15 experiments which includes the next requested
batch and so on. Rather the Sequential policy has a point at every step since it requests experiments
one at a time. It is important to realize that we generally expect a good sequential policy to do better,
or no worse, than a batch policy with respect to performance per number of experiments. Thus, the
Sequential curve can be typically viewed as an upper performance bound and provides an indication
of how much loss is incurred when moving to a batch setting in terms of efficiency per experiment.
Comparison to Baselines. The major observation from our results is that for all benchmarks and
for both batch sizes the proposed k-means and k-medoid approaches significantly outperform the
baselines. This provides strong validation for our proposed simulation-matching approach to batch
selection.
k-means vs. k-medoid. In most cases, the k-means and k-medoid approaches perform similarly.
However, for both batch sizes k-medoid often shows a small improvement over k-means and appears to have a significant advantage in FuelCell. The only exception is in Hydrogen where k-means shows a small advantage over k-medoid for small numbers of experiments. Overall, both
approaches appear to be effective and in these domains k-medoid has a slight edge.
Batch vs. Sequential. The advantage of Sequential over our batch approaches varies with the benchmark. However, in most cases, our proposed batch approaches catch up to Sequential in a relatively
small number of experiments and in some cases, the batch policies are similar to Sequential from
the start. The main exception is Cart-Pole for batch size 10, where the batch policies appear to be
significantly less efficient in terms of performance versus number of experiments. Generally, we see
that the difference between our batch policies and Sequential is larger for batch size 10 than batch
size 5, which is expected, since larger batch sizes imply that less information per experiment is used
in making decisions.
It is clear, however, that if we evaluate the performance of our batch policies in terms of experimental time, then there is a very significant advantage over Sequential. In particular, the amount of
experimental time for a policy is approximately equal to the number of requested batches, assuming
that the batch size is selected to allow for all selected experiments to be run in parallel. This means,
for example, that for the batch size 5 results, 5 time steps for the batch approaches correspond to
30 total experiments (5 initial + 5 batches). We can compare this point to the first point on the
Sequential curve, which also corresponds to 5 time steps (5 experiments beyond the initial 5). In all
cases, the batch policies yield a very large improvement in regret reduction per unit time, which is
the primary motivation for batch selection.
6 Summary and Future Work
In this paper we introduced a novel approach to batch BO based on the idea of simulation matching.
The key idea of our approach is to design batches of experiments that approximately match the
expected performance of high-quality sequential policies for BO. We considered two variants of
the matching problem and showed that both approaches significantly outperformed two baselines
including random batch selection on six benchmark functions. For future work we plan to consider
the general idea of simulation matching for other problems, such as active learning, where there are
also good sequential policies and batch selection is often warranted. In addition, we plan to consider
less myopic approaches for selecting each batch and the problem of batch size selection, where there
is a choice about batch size that must take into account the current data and experimental budget.
Acknowledgments
The authors acknowledge the support of the NSF under grants IIS-0905678.
References
[1] B. S. Anderson, A. W. Moore, and D. Cohn. A nonparametric approach to noisy and costly optimization. In ICML, 2000.
[2] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 13:835–846, 1983.
[3] D. Bond and D. Lovley. Electricity production by Geobacter sulfurreducens attached to electrodes. Applied and Environmental Microbiology, 69:1548–1555, 2003.
[4] E. Brochu, M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. Technical Report TR-2009-23, Department of Computer Science, University of British Columbia, 2009.
[5] M. Brunato, R. Battiti, and S. Pasupuleti. A memory-based rash optimizer. In AAAI-06 Workshop on Heuristic Search, Memory Based Heuristics and Their Applications, 2006.
[6] E. H. Burrows, W.-K. Wong, X. Fern, F. W. Chaplen, and R. L. Ely. Optimization of pH and nitrogen for enhanced hydrogen production by Synechocystis sp. PCC 6803 via statistical and machine learning methods. Biotechnology Progress, 25:1009–1017, 2009.
[7] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14:265–294, 1978.
[8] Y. Guo and D. Schuurmans. Discriminative batch mode active learning. In Advances in Neural Information Processing Systems (NIPS), 2007.
[9] V. P. Il'ev. An approximation guarantee of the greedy descent algorithm for minimizing a supermodular set function. Discrete Applied Mathematics, 114(1-3):131–146, 2001.
[10] D. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21:345–383, 2001.
[11] L. Kaufman and P. J. Rousseeuw. Clustering by means of medoids. In Statistical Data Analysis Based on the L1 Norm, pages 405–416, 1987.
[12] D. Lizotte. Practical Bayesian Optimization. PhD thesis, University of Alberta, 2008.
[13] S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
[14] M. Locatelli. Bayesian algorithms for one-dimensional global optimization. Journal of Global Optimization, 10(1):57–76, 1997.
[15] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs (2nd, extended ed.). Springer-Verlag, New York, NY, USA, 1994.
[16] A. Moore and J. Schneider. Memory-based stochastic optimization. In NIPS, 1995.
[17] G. Nemhauser and L. Wolsey. Integer and Combinatorial Optimization. Wiley, New York, 1999.
[18] D. Park and J. Zeikus. Improved fuel cell and electrode designs for producing electricity from microbial degradation. Biotechnology and Bioengineering, 81(3):348–355, 2003.
[19] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[20] A. M. Ross. Computing bounds on the expected maximum of correlated normal variables. Methodology and Computing in Applied Probability, 2008.
[21] M. Schonlau. Computer Experiments and Global Optimization. PhD thesis, University of Waterloo, 1997.
[22] H. D. Vinod. Integer programming and the theory of grouping. Journal of the American Statistical Association, 64(326):506–519, 1969.
3,406 | 4,084 | Online Learning in the Manifold of
Low-Rank Matrices
Uri Shalit*, Daphna Weinshall
Computer Science Dept. and ICNC
The Hebrew University of Jerusalem
[email protected]
[email protected]
Gal Chechik
Google Research and
The Gonda Brain Research Center
Bar Ilan University
[email protected]
Abstract
When learning models that are represented in matrix forms, enforcing a low-rank
constraint can dramatically improve the memory and run time complexity, while
providing a natural regularization of the model. However, naive approaches for
minimizing functions over the set of low-rank matrices are either prohibitively
time consuming (repeated singular value decomposition of the matrix) or numerically unstable (optimizing a factored representation of the low rank matrix).
We build on recent advances in optimization over manifolds, and describe an iterative online learning procedure, consisting of a gradient step, followed by a
second-order retraction back to the manifold. While the ideal retraction is hard to
compute, and so is the projection operator that approximates it, we describe another second-order retraction that can be computed efficiently, with run time and memory complexity of O((n + m)k) for a rank-k matrix of dimension m × n, given rank-one gradients. We use this algorithm, LORETA, to learn a matrix-form similarity measure over pairs of documents represented as high dimensional vectors. LORETA improves the mean average precision over a passive-aggressive approach in a factorized model, and also improves over a full model trained over pre-selected features using the same memory requirements. LORETA also
showed consistent improvement over standard methods in a large (1600 classes)
multi-label image classification task.
1
Introduction
Many learning problems involve models represented in matrix form. These include metric learning,
collaborative filtering, and multi-task learning where all tasks operate over the same set of features.
In many of these models, a natural way to regularize the model is to limit the rank of the corresponding matrix. In metric learning, a low rank constraint allows to learn a low dimensional representation
of the data in a discriminative way. In multi-task problems, low rank constraints provide a way to
tie together different tasks. In all cases, low-rank matrices can be represented in a factorized form
that dramatically reduces the memory and run-time complexity of learning and inference with that
model. Low-rank matrix models could therefore scale to handle substantially more features
and classes than with full rank dense matrices.
As with many other problems, the rank constraint is non-convex, and in the general case, minimizing a convex function subject to a rank constraint is NP-hard [1]¹. As a result, two main approaches have been commonly used. Sometimes, a matrix W ∈ R^{n×m} of rank k is represented as a product of two low dimension matrices W = AB^T, A ∈ R^{n×k}, B ∈ R^{m×k}, and simple gradient descent techniques are applied to each of the product terms separately [3]. Second, projected gradient algorithms can be applied by repeatedly taking a gradient step and projecting back to the manifold of low-rank matrices. Unfortunately, computing the projection to that manifold becomes prohibitively costly for large matrices and cannot be computed after every gradient step.
* Also at the Gonda Brain Research Center, Bar Ilan University.
¹ Some special cases are solvable (notably, PCA), relying mainly on singular value decomposition [2] and semi-definite programming techniques. These methods scale poorly to large scale tasks.
Figure 1: A two-step procedure for computing a retracted gradient. The first step computes the Riemannian gradient ξ^t (the projection of the gradient onto the tangent space T_x M^{n,m}_k), yielding x^{t+1/2} = x^t + η^t ∇̃L(x^t). The second step computes the retraction onto the manifold, x^{t+1} = R_x(η^t ξ^t).
In this paper we propose new algorithms for online learning on the manifold of low-rank matrices,
which are based on an operation called retraction. Retractions are operators that map from a vector
space that is tangent to the manifold, into the manifold. They include the projection operator as a
special case, but also include other retractions that can be computed dramatically more efficiently.
We use second order retractions to develop LORETA, an online algorithm for learning low-rank matrices. It has a memory and run time complexity of O((n + m)k) when the gradients have rank
one, a case which is relevant to numerous online learning problems as we show below.
We test Loreta in two different domains and learning tasks. First, we learn a bilinear similarity
measure among pairs of text documents, where the number of features (text terms) representing
each document could become very large. Loreta performed better than other techniques that operate
on a factorized model, and also improves retrieval precision by 33% as compared with training a
full rank model over pre-selected most informative features, using comparable memory footprint.
Second, we applied Loreta to image multi-label ranking, a problem in which the number of classes
could grow to millions. Loreta significantly improved over full rank models, using a fraction of the
memory required. These two experiments suggest that low-rank optimization could become very
useful for learning in high-dimensional problems.
This paper is organized as follows. We start with an introduction to optimization on manifolds,
describing the notion of retractions. We then derive our low-rank online learning algorithm, and test
it in two applications: learning similarity of text documents, and multi-label ranking for images.
2
Optimization on Riemannian manifolds
The field of numerical optimization on smooth manifolds has advanced significantly in the past
few years. We start with a short introduction to embedded manifolds, which are the focus of
this paper. An embedded manifold is a smooth subset of an ambient space R^n. For instance, the set {x : ‖x‖_2 = 1, x ∈ R^n}, the unit sphere, is an (n − 1)-dimensional manifold embedded in n-dimensional space R^n. Here we focus on the manifold of low-rank matrices, namely the set of n × m matrices of rank k where k < m, n. It is an ((n + m)k − k²)-dimensional manifold embedded in R^{n×m}, which we denote M^{n,m}_k. Embedded manifolds inherit many properties from the ambient
space, a fact which simplifies their analysis. For example, the Riemannian metric for embedded
manifolds is simply the Euclidean metric restricted to the manifold.
Motivated by online learning, we focus here on developing a stochastic gradient descent procedure to minimize a loss function L over the manifold of low-rank matrices M^{n,m}_k:

    min_x L(x)   s.t.   x ∈ M^{n,m}_k .        (1)
To illustrate the challenge in this problem, consider a simple stochastic gradient descent algorithm (Fig. 1). At every step t of the algorithm, a gradient step update takes x^{t+1/2} outside of the manifold M and has to be mapped back onto the manifold. The most common mapping operation is the projection operation, which, given a point x^{t+1/2} outside the manifold, would find the closest point in M. Unfortunately, the projection operation is very expensive to compute for the manifold of low-rank matrices, since it basically involves a singular value decomposition. Here we describe a wider class of operations called retractions, that serve a similar purpose: they find a point on the manifold that is in the direction of the gradient. Importantly, we describe a specific retraction that can be computed efficiently. Its runtime complexity depends on four quantities: the model matrix dimensions m and n; its rank k; and the rank of the gradient matrix, r. The overall complexity is O((n + m)(k + r)²), and O((n + m)k) for rank-one gradients, which are a very common case.
To explain how retractions are computed, we first describe the notion of a tangent space and the
Riemannian gradient of a function on a manifold.
Riemannian gradient and the tangent space
Each point x in an embedded manifold M has a tangent space associated with it, denoted Tx M
(see Fig. 1). The tangent space is a vector space of the same dimension as the manifold that can
be identified in a natural way with a linear subspace of the ambient space. It is usually simple to
compute the linear projection Px of any point in the ambient space onto the tangent space Tx M.
Given a manifold M and a differentiable function L : M → R, the Riemannian gradient ∇̃L(x)
of L on M at a point x is a vector in the tangent space Tx M. A very useful property of embedded
manifolds is the following: given a differentiable function f defined on the ambient space (and thus
on the manifold), the Riemannian gradient of f at point x is simply the linear projection Px of the
ordinary gradient of f onto the tangent space Tx M. An important consequence follows in case
the manifold represents the set of points obeying a certain constraint. In this case the Riemannian
gradient of f is equivalent to the ordinary gradient of f minus the component which is normal
to the constraint. Indeed this normal component is exactly the component which is irrelevant when
performing constrained optimization.
The Riemannian gradient allows us to compute x^{t+1/2} = x^t + η^t ∇̃L(x^t), for a given iterate point x^t and step size η^t. We now examine how x^{t+1/2} can be mapped back onto the manifold.
Retractions
Intuitively, retractions capture the notion of "going along a straight line" on the manifold. The mathematically ideal retraction is called the exponential mapping: it maps the tangent vector ξ ∈ T_x M to a point along a geodesic curve which goes through x in the direction of ξ. Unfortunately, for many manifolds (including the low-rank manifold considered here) calculating the geodesic curve is computationally expensive. A major insight from the field of Riemannian manifold optimization is that using the exponential mapping is unnecessary, since computationally cheaper retractions exist.
Formally, for a point x in an embedded manifold M, a retraction is any function R_x : T_x M → M which satisfies the following two conditions [4]: (1) Centering: R_x(0) = x. (2) Local rigidity: the curve defined by γ_ξ(τ) = R_x(τξ) satisfies γ′_ξ(0) = ξ. It can be shown that any such retraction approximates the exponential mapping to first order [4]. Second-order retractions, which approximate the exponential mapping to second order around x, have to satisfy the following stricter condition:

    P_x ( d²R_x(τξ)/dτ² |_{τ=0} ) = 0   for all ξ ∈ T_x M,

where P_x is the linear projection from the ambient space onto the tangent space T_x M. When viewed intrinsically, the curve R_x(τξ) defined by a second-order retraction has zero acceleration at point x, namely, its second order derivatives are all normal to the manifold. The best known example of a second-order retraction onto embedded manifolds is the projection operation [5]. Importantly, projections are viewed here as one type of second order approximation to the exponential mapping, which can be replaced by any other second-order retraction when computing the projection is too costly.
Given the tangent space and a retraction, we can now define a Riemannian gradient descent step for the loss L at point x^t ∈ M:
(1) Gradient step: Compute x^{t+1/2} = x^t + ξ^t, with ξ^t = ∇̃L(x^t) = P_{x^t}(∇L(x^t)), where ∇L(x^t) is the ordinary gradient of L in the ambient space.
(2) Retraction step: Compute x^{t+1} = R_{x^t}(−η^t ξ^t), where η^t is the step size.
For a proper step size, this procedure can be proved to have local convergence for any retraction [4].
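The two-step scheme is easiest to see on a manifold where both operators are cheap. The following sketch (our own Python/NumPy illustration, not code from the paper) runs the gradient-step-plus-retraction iteration on the unit sphere mentioned in Section 2, where the tangent projection is P_x(v) = v − (x^T v)x and renormalization is a valid projection retraction.

```python
import numpy as np

def sphere_sgd_step(x, ambient_grad, step_size):
    # (1) Gradient step: project the ambient gradient onto the tangent
    #     space T_x M = {v : x^T v = 0} and move along the result.
    riemannian_grad = ambient_grad - np.dot(x, ambient_grad) * x
    x_half = x - step_size * riemannian_grad   # x^{t+1/2}, leaves the sphere
    # (2) Retraction step: renormalizing maps x^{t+1/2} back onto the sphere
    #     and is a valid (projection) retraction for this manifold.
    return x_half / np.linalg.norm(x_half)

# Example: minimize L(x) = x^T C x over the unit sphere; the minimizer is the
# eigenvector of C with the smallest eigenvalue.
rng = np.random.default_rng(0)
C = rng.standard_normal((5, 5))
C = C @ C.T
x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for _ in range(2000):
    x = sphere_sgd_step(x, ambient_grad=2 * C @ x, step_size=0.01)
print(x @ C @ x, np.linalg.eigvalsh(C)[0])  # the two values should be close
```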
3
Online learning on the low rank manifold
Based on the retractions described above, we now present an online algorithm for learning low-rank matrices, by performing stochastic gradient descent on the manifold of low-rank matrices. At every iteration the algorithm suffers a loss, and performs a Riemannian gradient step followed by a retraction to the manifold M^{n,m}_k. Section 3.1 discusses general online updates. Section 3.2 discusses the very common case where the online updates induce a gradient of rank r = 1.
In what follows, a lowercase x denotes an abstract point on the manifold, lowercase Greek letters like ξ denote an abstract tangent vector, and uppercase Roman letters like A denote concrete matrix representations as kept in memory (taking n × m float numbers to store). We intermix the two notations, as in ξ = AZ, when the meaning is clear from the context. The set of n × k matrices of rank k is denoted R_*^{n×k}.
3.1
The general LORETA algorithm
We start with a Lemma that gives a representation of the tangent space Tx M, extending the constructions given in [6] to the general manifold of low-rank matrices. The proof is given in the
supplemental material.
Lemma 1. Let x ∈ M^{n,m}_k have a (non-unique) factorization x = AB^T, where A ∈ R_*^{n×k}, B ∈ R_*^{m×k}. Let A_⊥ ∈ R^{n×(n−k)} and B_⊥ ∈ R^{m×(m−k)} be the orthogonal complements of A and B respectively, such that A_⊥^T A = 0, B_⊥^T B = 0, A_⊥^T A_⊥ = I_{n−k}, B_⊥^T B_⊥ = I_{m−k}. The tangent space to M^{n,m}_k at x is:

    T_x M = { [A A_⊥] · [M, N_1^T; N_2, 0] · [B B_⊥]^T : M ∈ R^{k×k}, N_1 ∈ R^{(m−k)×k}, N_2 ∈ R^{(n−k)×k} }        (2)
Let ξ ∈ T_x M^{n,m}_k be a tangent vector to x = AB^T. From the characterization above it follows that ξ can be decomposed in a unique manner into three orthogonal components: ξ = ξ_S + ξ_P^l + ξ_P^r, where ξ_S = AMB^T, ξ_P^l = AN_1^T B_⊥^T and ξ_P^r = A_⊥ N_2 B^T. In online learning we are repeatedly given a rank-r gradient matrix Z, and want to compute a step on M^{n,m}_k in the direction of Z. As a first step we wish to find its projection P_x(Z) onto the tangent space. Specifically, we wish to find the three matrices M, N_1 and N_2 such that P_x(Z) = AMB^T + AN_1^T B_⊥^T + A_⊥ N_2 B^T. Since we assume A is of full column rank, its pseudo-inverse A† obeys A† = (A^T A)^{−1} A^T. The matrix projecting onto A's columns, denoted P_A, is exactly equal to AA†. We can similarly define P_{A_⊥}, P_B and P_{B_⊥}. A straightforward computation shows that for a given matrix Z, we have M = A† Z B†^T, N_1 = B_⊥^T Z^T A†^T, N_2 = A_⊥^T Z B†^T, yielding ξ_S = P_A Z P_B, ξ_P^l = P_A Z P_{B_⊥}, ξ_P^r = P_{A_⊥} Z P_B.
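As a sanity check on these identities, the following sketch (ours; all names are illustrative) computes M and the three orthogonal components of P_x(Z) directly from A, B and Z, without ever forming A_⊥ or B_⊥, using P_{A_⊥} = I − P_A and P_{B_⊥} = I − P_B.

```python
import numpy as np

def tangent_projection_parts(A, B, Z):
    # Pseudo-inverses of the full-column-rank factors: A^dagger = (A^T A)^{-1} A^T.
    A_pinv = np.linalg.solve(A.T @ A, A.T)
    B_pinv = np.linalg.solve(B.T @ B, B.T)
    M = A_pinv @ Z @ B_pinv.T                 # k x k middle block
    P_A, P_B = A @ A_pinv, B @ B_pinv         # projections onto col(A), col(B)
    xi_S = P_A @ Z @ P_B                      # equals A M B^T
    xi_lP = P_A @ Z @ (np.eye(B.shape[0]) - P_B)
    xi_rP = (np.eye(A.shape[0]) - P_A) @ Z @ P_B
    return M, xi_S, xi_lP, xi_rP

rng = np.random.default_rng(1)
n, m, k = 8, 6, 3
A = rng.standard_normal((n, k))
B = rng.standard_normal((m, k))
Z = rng.standard_normal((n, m))
M, xi_S, xi_lP, xi_rP = tangent_projection_parts(A, B, Z)
# The components are mutually orthogonal under the trace inner product,
# and xi_S matches the closed form A M B^T from the text:
print(np.isclose(np.sum(xi_S * xi_lP), 0.0), np.isclose(np.sum(xi_S * xi_rP), 0.0))
print(np.allclose(xi_S, A @ M @ B.T))
```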
The following theorem defines the retraction that we use. The proof is given in the supplemental material.
Theorem 1. Let x ∈ M^{n,m}_k, x = AB^T, and x† = B†^T A† = B(B^T B)^{−1}(A^T A)^{−1} A^T (this holds since we assume A and B are of full column rank). Let ξ ∈ T_x M^{n,m}_k, ξ = ξ_S + ξ_P^l + ξ_P^r, as described above, and let

    w_1 = x + (1/2) ξ_S + ξ_P^r − (1/8) ξ_S x† ξ_S − (1/2) ξ_P^r x† ξ_S ,        (3)
    w_2 = x + (1/2) ξ_S + ξ_P^l − (1/8) ξ_S x† ξ_S − (1/2) ξ_S x† ξ_P^l .

The mapping R_x(ξ) = w_1 x† w_2 is a second order retraction from a neighborhood Θ_x ⊂ T_x M^{n,m}_k to M^{n,m}_k.
We now have the ingredients necessary for a Riemannian stochastic gradient descent algorithm.
Algorithm 1: Naive Riemannian stochastic gradient descent
Input: Matrices A ∈ R_*^{n×k}, B ∈ R_*^{m×k} s.t. x = AB^T. Matrices G_1 ∈ R^{n×r}, G_2 ∈ R^{m×r} s.t. G_1 G_2^T = −η∇L(x) ∈ R^{n×m}, where ∇L(x) is the gradient in the ambient space and η > 0 is the step size.
Output: Matrices Z_1 ∈ R_*^{n×k}, Z_2 ∈ R_*^{m×k} such that Z_1 Z_2^T = R_x(−ηξ), where ξ = P_x(∇L(x)).
Compute:                                                          matrix dimension
  A† = (A^T A)^{−1} A^T,  B† = (B^T B)^{−1} B^T                    k × n, k × m
  A_⊥, B_⊥ = orthogonal complements of A, B                        n × (n−k), m × (m−k)
  M = A† G_1 G_2^T B†^T                                            k × k
  N_1 = B_⊥^T G_2 G_1^T A†^T,  N_2 = A_⊥^T G_1 G_2^T B†^T          (m−k) × k, (n−k) × k
  Z_1 = A(I_k + (1/2)M − (1/8)M²) + A_⊥ N_2 (I_k − (1/2)M)          n × k
  Z_2 = B(I_k + (1/2)M^T − (1/8)(M^T)²) + B_⊥ N_1 (I_k − (1/2)M^T)  m × k
Given a gradient in the ambient space ∇L(x), we can calculate the matrices M, N_1 and N_2 which allow us to represent its projection onto the tangent space, and furthermore allow us to calculate the retraction. The procedure is outlined in Algorithm 1, with some rearranging and term collection.
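A direct NumPy transcription of Algorithm 1 might look as follows (a sketch under our reconstruction of the algorithm table above; not reference code).

```python
import numpy as np

def loreta_naive_step(A, B, G1, G2):
    """One step of Algorithm 1: given x = A B^T and factors G1, G2 of the
    scaled negative gradient (G1 @ G2.T == -eta * grad), return Z1, Z2 whose
    product is the retracted point. Note the explicit orthogonal complements,
    whose cost Algorithm 2 removes."""
    k = A.shape[1]
    A_pinv = np.linalg.solve(A.T @ A, A.T)
    B_pinv = np.linalg.solve(B.T @ B, B.T)
    # Orthogonal complements via full QR factorizations.
    A_perp = np.linalg.qr(A, mode='complete')[0][:, k:]
    B_perp = np.linalg.qr(B, mode='complete')[0][:, k:]
    M = A_pinv @ G1 @ G2.T @ B_pinv.T
    N1 = B_perp.T @ G2 @ G1.T @ A_pinv.T
    N2 = A_perp.T @ G1 @ G2.T @ B_pinv.T
    I = np.eye(k)
    Z1 = A @ (I + 0.5 * M - 0.125 * M @ M) + A_perp @ N2 @ (I - 0.5 * M)
    Z2 = B @ (I + 0.5 * M.T - 0.125 * M.T @ M.T) + B_perp @ N1 @ (I - 0.5 * M.T)
    return Z1, Z2
```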
Algorithm 1 explicitly computes and stores the orthogonal complement matrices A_⊥ and B_⊥, which in the low-rank case k ≪ m, n, have size O(mn) like the original x. To improve the memory complexity, we use the fact that the matrices A_⊥ and B_⊥ always operate with their transpose. Since they are orthogonal, the matrix A_⊥ A_⊥^T is a projection matrix, one which we denoted earlier by P_{A_⊥}, and likewise for B_⊥. Because of the orthogonal complementarity, these projection matrices are equal to I_n − P_A and I_m − P_B respectively. We use this identity to reformulate the algorithm such that only matrices of size at most max(n, m) × k or max(n, m) × r are kept in memory. The runtime complexity of Algorithm 2 can be easily computed based on matrix multiplication complexity, and equals O((n + m)(k + r)²).
Algorithm 2: General Riemannian stochastic gradient descent
Input and Output: As in Algorithm 1.
Compute:                                                          matrix dimension
  A† = (A^T A)^{−1} A^T,  B† = (B^T B)^{−1} B^T                    k × n, k × m
  Â = A† G_1,  B̂ = B† G_2                                          k × r, k × r
  ProjAG = A Â                                                     n × r
  Q = B̂^T Â                                                        r × r
  A* = −(1/2)ProjAG + (3/8)ProjAG Q + G_1 − (1/2)G_1 Q              n × r
  Z_1 = A + A* B̂^T                                                 n × k
  GBproj = G_2^T B B†                                              r × m
  B* = −(1/2)GBproj + (3/8)Q GBproj + G_2^T − (1/2)Q G_2^T          r × m
  Z_2^T = B^T + Â B*                                               k × m
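The reformulated update can be transcribed the same way. On random full-column-rank inputs it reproduces the output of the naive transcription above to numerical precision, which is a convenient check of the identity P_{A_⊥} = I − P_A (again our sketch, not reference code).

```python
import numpy as np

def loreta_general_step(A, B, G1, G2):
    """One step of Algorithm 2: same output as loreta_naive_step, but no
    n x (n-k) or m x (m-k) matrices are ever formed."""
    A_pinv = np.linalg.solve(A.T @ A, A.T)       # k x n
    B_pinv = np.linalg.solve(B.T @ B, B.T)       # k x m
    A_hat = A_pinv @ G1                          # k x r
    B_hat = B_pinv @ G2                          # k x r
    proj_AG = A @ A_hat                          # n x r
    Q = B_hat.T @ A_hat                          # r x r
    A_star = -0.5 * proj_AG + 0.375 * proj_AG @ Q + G1 - 0.5 * G1 @ Q
    Z1 = A + A_star @ B_hat.T                    # n x k
    GB_proj = G2.T @ B @ B_pinv                  # r x m
    B_star = -0.5 * GB_proj + 0.375 * Q @ GB_proj + G2.T - 0.5 * Q @ G2.T
    Z2 = (B.T + A_hat @ B_star).T                # m x k
    return Z1, Z2
```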
Algorithm 3, Loreta-1: Rank-one Riemannian stochastic gradient descent
Input: Matrices A ∈ R_*^{n×k}, B ∈ R_*^{m×k} s.t. x = AB^T. Matrices A† and B†, the pseudo-inverses of A and B respectively. Vectors G_1 ∈ R^{n×1}, G_2 ∈ R^{m×1} s.t. G_1 G_2^T = −η∇L(x) ∈ R^{n×m}, where ∇L(x) is the gradient in the ambient space and η > 0 is the step size.
Output: Matrices Z_1 ∈ R_*^{n×k}, Z_2 ∈ R_*^{m×k} s.t. Z_1 Z_2^T = R_x(−ηξ). Matrices Z_1† and Z_2†, the pseudo-inverses of Z_1 and Z_2 respectively.
Compute:                                                  matrix dimension
  Â = A† G_1,  B̂ = B† G_2                                  k × 1
  ProjAG = A Â                                             n × 1
  Q = B̂^T Â                                                1 × 1
  A* = ProjAG(−1/2 + (3/8)Q) + G_1(1 − (1/2)Q)              n × 1
  Z_1 = A + A* B̂^T                                         n × k
  GBproj = G_2^T B B†                                      1 × m
  B* = GBproj(−1/2 + (3/8)Q) + G_2^T(1 − (1/2)Q)            1 × m
  Z_2^T = B^T + Â B*                                       k × m
  Z_1† = rank-one pseudoinverse update of A† (from A, A*, B̂)   k × n
  Z_2† = rank-one pseudoinverse update of B† (from B, B*, Â)   k × m
3.2
LORETA with rank-one gradients
In many learning problems, the gradient matrix required for a gradient step update has a rank of one. This is the case, for example, when the matrix model W acts as a bilinear form on two vectors, p and q, and the loss is a linear function of p^T W q (as in [7, 8], and Sec. 5.1). In that case, the gradient is the rank-one outer product matrix pq^T. As another example, consider the case of multitask learning, where the matrix model W operates on a vector input p, and the loss is the squared loss between the multiple predictions Wp and the true labels q: ‖Wp − q‖². The gradient of the loss is (Wp − q)p^T, which is again a rank-one matrix. We now show how to reduce the complexity of each iteration to be linear in the model rank k when the rank of the gradient matrix r is one.
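Both examples factor immediately, so the n × m gradient never has to be formed; a small illustration (ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 4
W = rng.standard_normal((n, m))

# Bilinear model: a loss linear in p^T W q has gradient proportional to p q^T,
# so the rank-one factors are simply g1 = p, g2 = q.
p, q = rng.standard_normal(n), rng.standard_normal(m)
print(np.linalg.matrix_rank(np.outer(p, q)))    # 1

# Multitask squared loss ||W p - q||^2 (p in R^m, targets q in R^n): the
# gradient is (W p - q) p^T up to a constant absorbed into the step size.
p2, q2 = rng.standard_normal(m), rng.standard_normal(n)
g1, g2 = W @ p2 - q2, p2
print(np.linalg.matrix_rank(np.outer(g1, g2)))  # 1: keep only the two factors
```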
Given rank-one gradients (r = 1), the most computationally demanding step in Algorithm 2 is the computation of the pseudo-inverses of the matrices A and B, taking O(nk²) and O(mk²) operations. All other operations are O(max(n, m)k) at most. For r = 1 the outputs Z_1 and Z_2 become rank-one updates of the input matrices A and B. This enables us to keep the pseudo-inverses A† and B† from the previous round, and perform a rank-one update to them, following a procedure developed by [9].
This procedure is similar to the better known Sherman-Morrison formula for the inverse of a rank-one perturbed matrix, and its computational complexity for an n × k matrix is O(nk) operations. Using that procedure, we derive our final algorithm, Loreta-1, the rank-one Riemannian stochastic gradient descent. Its overall time and space complexity are both O((n + m)k) per gradient step. The memory requirement of Loreta-1 is about 4nk (assuming m = n), since it receives four input matrices of size nk (A, B, A†, B†) and assuming it can compute the four outputs (Z_1, Z_2, Z_1†, Z_2†) in place while destroying previously computed terms.
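Putting the pieces together, one Loreta-1 step might be sketched as below. For brevity this sketch recomputes the pseudo-inverses from scratch (O(nk²) per step); the actual algorithm instead maintains them with the O(nk) rank-one update of [9], whose details we do not reproduce here.

```python
import numpy as np

def loreta1_step(A, B, A_pinv, B_pinv, g1, g2):
    """Rank-one Loreta step: g1 (length n) and g2 (length m) satisfy
    np.outer(g1, g2) == -eta * grad L(x) for x = A B^T."""
    A_hat = A_pinv @ g1                      # k-vector
    B_hat = B_pinv @ g2                      # k-vector
    q = float(B_hat @ A_hat)                 # the scalar "Q"
    a_star = (A @ A_hat) * (-0.5 + 0.375 * q) + g1 * (1.0 - 0.5 * q)
    Z1 = A + np.outer(a_star, B_hat)         # rank-one update of A
    gb_proj = (g2 @ B) @ B_pinv              # length-m row vector
    b_star = gb_proj * (-0.5 + 0.375 * q) + g2 * (1.0 - 0.5 * q)
    Z2 = B + np.outer(b_star, A_hat)         # rank-one update of B
    # Stand-in for Meyer's O(nk) rank-one pseudo-inverse update:
    Z1_pinv = np.linalg.solve(Z1.T @ Z1, Z1.T)
    Z2_pinv = np.linalg.solve(Z2.T @ Z2, Z2.T)
    return Z1, Z2, Z1_pinv, Z2_pinv
```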
4
Related work
A recent summary of many advances in the field of optimization on manifolds is given in [4]. More
specific to the field of low rank matrix manifolds, some work has been done on the general problem
of optimization with low rank positive semi-definite (PSD) matrices. These include [10] and [6]; the
latter introduced the retraction for PSD matrices which we extended here to general low-rank matrices. The problem of minimizing a convex function over the set of low rank matrices, was addressed
by several authors, including [11], and [12] which also considers additional affine constraints, and
its connection to recent advances in compressed sensing. The main tools used in these works are the
trace norm (sum of singular values) and semi-definite programming. See also [2].
More closely related to the current paper are the works by Kulis et al. [13] and Meka et al. [14]. The
first deals with learning low rank PSD matrices, and uses the rank-preserving log-det divergence and
clever factorization and optimization in order to derive an update rule with runtime complexity of
O(nk²) for an n × n matrix of rank k. The second uses online learning in order to find a minimal
rank square matrix under approximate affine constraints. The algorithm does not directly allow a
factorized representation, and depends crucially on an "oracle" component, which typically requires computing an SVD. Multi-class ranking with a large number of features was studied in [3].
5
Experiments
We tested Loreta-1 in two learning tasks: learning a similarity measure between pairs of text documents using the 20-newsgroups data collected by [15], and learning to rank image label annotations
based on a multi-label annotated set, using the ImageNet dataset [16]².
5.1 Learning similarity on the 20 Newsgroups data set
In our first set of experiments, we looked at the problem of learning a similarity measure between
pairs of text documents. Similarity learning is a well studied problem, closely related to metric
learning (see [17] for a review). It has numerous applications in information retrieval such as query
by example, and finding related content on the web.
One approach to learn pairwise relations is to measure the similarity of two documents p, q ∈ R^n using a bilinear form S_W(p, q) = p^T W q, parametrized by a model W ∈ R^{n×n}. Such models can be learned using standard online methods [8], and were shown to achieve high precision. Unfortunately, since the number of parameters grows as n², storing the matrix W in memory is only
feasible for limited feature dimensionality. To handle larger vocabularies, like those containing all
textual terms found in a corpus, a common approach is to pre-select a subset of the features and train
a model over the low dimensional data. However, such preprocessing may remove crucial signals in
the data even if features are selected in a discriminative way.
To overcome this difficulty, we used Loreta-1 to learn a rank-k parametrization of the model W ,
which can be factorized as W = AB^T, where A, B ∈ R^{n×k}. In each of our experiments, we
selected a subset of n features, and trained a rank k model. We varied the number of features n and
the rank of the matrix k so as to use a fixed amount of memory. For example, we used a rank-10
model with 50K features, and a rank-50 model with 10K features.
Similarity learning with Loreta-1. We use an online procedure similar to that in [7, 8]. At each
round, three instances are sampled: a query document q, and two documents p1 and p2 such that
p1 is known to be more similar to q than p2. We wish the model to assign a higher similarity score to the pair (q, p1) than to the pair (q, p2), and hence use the online ranking hinge loss defined as l_W(q, p1, p2) = [1 − S_W(q, p1) + S_W(q, p2)]_+.
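When the loss is positive, its gradient with respect to W is the rank-one matrix −q(p1 − p2)^T, which matches the factored input convention of Loreta-1. A sketch of one training round (our code; W = AB^T is never formed):

```python
import numpy as np

def triplet_gradient_factors(A, B, q, p1, p2, eta):
    """Ranking hinge loss l_W(q,p1,p2) = [1 - S_W(q,p1) + S_W(q,p2)]_+ with
    W = A B^T. When the loss is positive, dW l = -q (p1 - p2)^T, so the
    factors of -eta * dW l are g1 = eta * q and g2 = p1 - p2."""
    s1 = (q @ A) @ (B.T @ p1)      # S_W(q, p1), computed in O((n + m)k)
    s2 = (q @ A) @ (B.T @ p2)
    loss = max(0.0, 1.0 - s1 + s2)
    if loss == 0.0:
        return None                # margin satisfied: no update this round
    return eta * q, p1 - p2
```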
² Matlab code for Loreta-1 can be provided upon request.
Data preprocessing and feature selection. We used the 20 newsgroups data set (people.csail.mit.edu/jrennie/20Newsgroups), containing 20 classes with approximately 1000 documents
each. We removed stop words but did not apply stemming. We selected features that conveyed high
information about the identity of the class (over the training set) using the infogain criterion [18].
The selected features were normalized using tf-idf, and then represented each document as a bag of
words. Two documents were considered similar if they shared the same class label.
Experimental procedure and evaluation protocol. The 20 newsgroups site proposes a split of
the data into train and test sets. We repeated splitting 5 times based on the sizes of the proposed
splits (a train / test ratio of 65% / 35%). We evaluated the learned similarity measures using a
ranking criterion. We view every document q in the test set as a query, and rank the remaining test
documents p by their similarity scores q^T W p. We then compute the precision (fraction of positives)
at the top r ranked documents. We further compute the mean average precision (mAP), a widely
used measure in the information retrieval community, which averages over all values of r.
Comparisons. We compared Loreta with the following approaches. (1) A direct gradient descent (GD) similar to [3]. The model is represented as a product of two matrices Ŵ = AB^T. Stochastic gradient descent steps are computed over the factors A and B, for the same loss used by Loreta, l_W(q, p1, p2). The step size η was selected using cross validation. The GD steps are: A_new = A + η q(p1 − p2)^T B, and B_new = B + η (p1 − p2) q^T A. (2) Iterative Passive-Aggressive (PA). We found the above GD procedure to be very unstable, often causing the models to diverge. We therefore used a related online algorithm from the family of passive-aggressive algorithms [19]. We iteratively optimize over A given a fixed B and vice versa. The optimization is a tradeoff between minimizing the loss l_W, and limiting how much the models change at each iteration. The step sizes for updating A and B are computed to be η_A = max( l_W(q, p1, p2) / (‖q‖² ‖B^T(p1 − p2)‖²), C ) and η_B = max( l_W(q, p1, p2) / (‖p1 − p2‖² ‖A^T q‖²), C ), where C is a predefined parameter controlling the maximum magnitude of the step size. This procedure is numerically more stable because of the normalization by the norms of the matrices multiplied by the gradient factors. (3) Naive Passive-Aggressive (PA v2). This method is similar to the iterative PA above, with the step size computed as with unfactored matrices: η = max( l_W(q, p1, p2) / (‖q‖² ‖p1 − p2‖²), C ).
(4) Full rank similarity learning models. We compared with two online metric learning methods,
LEGO [20] and OASIS [8]. Both algorithms learn a full (non-factorized) model, and were run with
n = 1000, in order to be consistent with the memory constraint of Loreta-1. We have not compared
with batch approaches such as [13].
Figure 2b shows the mean average precision obtained with the three measures. Loreta outperforms
the PA approach across all ranks. More importantly, learning a low rank model of rank 30, using the
best 16660 features, is significantly more precise than learning a much fuller model of rank 100 and
5000 features. The intuition is that Loreta can be viewed as adaptively learning a linear projection
of the data into low dimensional space, which is tailored to the pairwise similarity task.
5.2 Image multilabel ranking
Our second set of experiments tackled the problem of learning to rank labels for images taken from
a large number of classes (L = 1661) with multiple labels per image.
In our approach, we learn a linear classifier over n features for each label c ∈ C = {1, . . . , L}, and stack all models together into a single matrix W ∈ R^{L×n}. At test time, given an image p ∈ R^n, the
product W p provides scores for every label per that image p. Given a ground truth labeling, a good
model would rank the true labels higher than the false ones. Each row of the matrix model can be
thought of as a sub-model for the corresponding label. Imposing a low rank constraint on the model
implies that these sub-models are linear combinations of a smaller number of latent models.
Online learning of label rankings with Loreta-1. At each iteration, an image p is sampled, and using the current model W the scores for all its labels are computed as Wp. These scores are compared with the ground truth labeling y = {y_1, . . . , y_r} ⊂ C. The learner suffers a multilabel multiclass hinge loss as follows. Let ȳ = argmax_{s ∉ y} (Wp)_s be the negative label which obtained the highest score, where (Wp)_s is the s-th component of the score vector Wp. The loss is then L(W, p, y) = Σ_{i=1}^{r} [(Wp)_ȳ − (Wp)_{y_i} + 1]_+. We then used the subgradient G of this loss for Loreta: for the set of indices i_1, i_2, . . . , i_d ∈ y which incurred a non-zero hinge loss, the i_j-th row of G is p, and for the row ȳ we set G to be −d · p. The matrix G is rank one, unless no loss was suffered, in which case it is 0.
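In factored form the subgradient is the outer product of a sparse label vector with p, so only one length-L vector and the image itself are needed. A sketch (our code; y is the set of true label indices):

```python
import numpy as np

def multilabel_subgradient_factors(W, p, y):
    """Return (u, p) such that G = np.outer(u, p) is the subgradient described
    above (rows i_j get +p, row y_bar gets -d*p), or None if no loss."""
    scores = W @ p
    negatives = [s for s in range(W.shape[0]) if s not in y]
    y_bar = max(negatives, key=lambda s: scores[s])    # top-scoring negative
    violating = [i for i in y if scores[y_bar] - scores[i] + 1.0 > 0.0]
    if not violating:
        return None
    u = np.zeros(W.shape[0])
    u[violating] = 1.0                                 # rows i_j of G are +p
    u[y_bar] = -float(len(violating))                  # row y_bar of G is -d*p
    return u, p
```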
[Figure 2 appears here: three panels of mean average precision (mAP) curves. (a) 20 Newsgroups: mAP vs. training iterations (thousands) for model ranks 10-100. (b) 20 Newsgroups: mAP vs. matrix rank k for LORETA, PA, PA v2, OASIS and LEGO. (c) ImageNet: mAP vs. matrix rank k for Loreta-1, Loreta-1 (rand. init.), Iterative PA, Iterative PA (rand. init.), Matrix Perceptron and Group MC Perceptron.]
Figure 2: (a) Mean average precision (mAP) over 20 newsgroups test set as traced along Loreta learning for
various ranks. Curve values are averages over 5 train-test splits. (b) mAP of different models with varying
rank. For each rank, a different number of features was selected using an information gain criterion, such that
the total memory requirement is kept fixed (number of features × rank is constant). 50000 features were used
for rank = 10. LEGO and OASIS were trained with the same memory (using 1000 features and rank=1000).
Error bars denote the standard error of the mean over 5 train-test splits (s.e.m.). (c) ImageNet data. mAP as a
function of the rank k. Curves are means over three train-test splits. Error bars denote the standard error of the
mean (s.e.m.). All hyper parameters were selected using cross validation. Models were initialized either with
k ones along the diagonal, or as a product of rank-k matrices with random normal entries (denoted rand. init.).
Data set and preprocessing.
We used a subset of the ImageNet 2010 Challenge
(www.imagenet.org/challenges/LSVRC/2010/) containing images labeled with respect to the WordNet hierarchy. Each image was manually labeled with a single class label (for a total of 1000 classes).
We added labels for each image, using classes along the path to the root of the hierarchy (adding 676
classes in total). We discarded ancestor labels covering more than 10% of the images, leaving 1661
labels (5.3 labels per image on average). We used ImageNet's bag of words representation, based on
vector quantizing SIFT features with a vocabulary of 1000 words, followed by tf-idf normalization.
Experimental procedure and evaluation protocol. We split the data into 30 training and 20 testing
images per every base level label. The quality of the learned label ranking, was evaluated using the
mean average precision (mAP) criterion mentioned above.
Comparisons. We compared the performance of Loreta on this task with three other approaches:
(1) PA: Iterative Passive-Aggressive as described above. (2) Matrix Perceptron: a full rank
conservative gradient descent. (3) Group Multi-Class Perceptron: a mixed (2,1) norm online mirror
descent algorithm [21]. Loreta and PA were run using a range of different model ranks. For all three
methods the step size (or C parameter for the PA) was chosen by 5-fold validation on the test set.
Figure 2c plots the mAP precision of Loreta and PA for different model ranks, while showing
on the right the mAP of the full rank 1000 gradient descent and (2, 1) norm algorithms. Loreta
significantly improves over all other methods across all ranks.
6
Discussion
We presented Loreta, an algorithm which learns a low-rank matrix based on stochastic Riemannian
gradient descent and efficient retraction to the manifold of low-rank matrices. Loreta achieves superior precision in a task of learning similarity in high dimensional feature spaces, and in multi-label
annotation, where it scales well with the number of classes.
Loreta yields a factorized representation of the low rank matrix. For classification, it can be viewed
as learning two matrix components: one that projects the high dimensional data into a low dimension, and a second that learns to classify in the low dimension. It may become useful in the future
for exploring high dimensional data, or extracting relations between large numbers of classes.
Acknowledgments
This work was supported by the Israel Science Foundation (ISF) and by the European Union under
the DIRAC integrated project IST-027787.
References
[1] B.K. Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24(2):227-234, 1995.
[2] M. Fazel, H. Hindi, and S. Boyd. Rank minimization and applications in system theory. In Proceedings of the 2004 American Control Conference, pages 3273-3278. IEEE, 2005.
[3] B. Bai, J. Weston, R. Collobert, and D. Grangier. Supervised semantic indexing. Advances in Information Retrieval, pages 761-765, 2009.
[4] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
[5] P.-A. Absil and J. Malick. Projection-like retractions on matrix manifolds. Technical Report UCL-INMA-2010.038, Department of Mathematical Engineering, Universite catholique de Louvain, July 2010.
[6] B. Vandereycken and S. Vandewalle. A Riemannian optimization approach for computing low-rank solutions of Lyapunov equations. SIAM Journal on Matrix Analysis and Applications, 31:2553, 2010.
[7] D. Grangier and S. Bengio. A discriminative kernel-based model to rank images from text queries. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30:1371-1384, 2008.
[8] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online learning of image similarity through ranking. Journal of Machine Learning Research, 11:1109-1135, 2010.
[9] C.D. Meyer. Generalized inversion of modified matrices. SIAM Journal on Applied Mathematics, 24(3):315-323, 1973.
[10] M. Journee, F. Bach, P.-A. Absil, and R. Sepulchre. Low-rank optimization on the cone of positive semidefinite matrices. SIAM Journal on Optimization, 20:2327-2351, 2010.
[11] M. Fazel. Matrix rank minimization with applications. PhD thesis, Electrical Engineering Department, Stanford University, 2002.
[12] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471-501, 2010.
[13] B. Kulis, M.A. Sustik, and I.S. Dhillon. Low-rank kernel learning with Bregman matrix divergences. The Journal of Machine Learning Research, 10:341-376, 2009.
[14] R. Meka, P. Jain, C. Caramanis, and I.S. Dhillon. Rank minimization via online learning. In Proceedings of the 25th International Conference on Machine Learning, pages 656-663, 2008.
[15] K. Lang. Learning to filter netnews. In Proceedings of the 12th International Conference on Machine Learning, pages 331-339, 1995.
[16] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. Fei-Fei. ImageNet: a large-scale hierarchical image database. In Proceedings of the 22nd IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[17] L. Yang. An overview of distance metric learning. Technical report, School of Computer Science, Carnegie Mellon University, 2007.
[18] Y. Yang and J.O. Pedersen. A comparative study on feature selection in text categorization. In Proceedings of the 14th International Conference on Machine Learning, pages 412-420, 1997.
[19] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551-585, 2006.
[20] P. Jain, B. Kulis, I.S. Dhillon, and K. Grauman. Online metric learning and fast similarity search. Advances in Neural Information Processing Systems, pages 761-768, 2008.
[21] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. Regularization techniques for learning with matrices, 2010. Preprint.
3,407 | 4,085 | Why are some word orders more common than
others? A uniform information density account
Luke Maurits, Amy Perfors & Daniel Navarro
School of Psychology,
University of Adelaide,
Adelaide, South Australia, 5000
{luke.maurits, amy.perfors, daniel.navarro}@adelaide.edu.au
Abstract
Languages vary widely in many ways, including their canonical word order. A
basic aspect of the observed variation is the fact that some word orders are much
more common than others. Although this regularity has been recognized for
some time, it has not been well-explained. In this paper we offer an informationtheoretic explanation for the observed word-order distribution across languages,
based on the concept of Uniform Information Density (UID). We suggest that
object-first languages are particularly disfavored because they are highly nonoptimal if the goal is to distribute information content approximately evenly
throughout a sentence, and that the rest of the observed word-order distribution
is at least partially explainable in terms of UID. We support our theoretical analysis with data from child-directed speech and experimental work.
1
Introduction
Many of the world's languages are sensitive to word order. In these languages, the order in which words are spoken conveys a great deal of the sentence's meaning. The classic English example is the distinction between "dog bites man" and "man bites dog", which differ in terms of who is biting whom. The so-called "basic" word order of a language is defined according to the order of three of the principal components of basic transitive sentences: subject (S), verb (V) and object (O). This results in six logically distinct word orders: SOV, SVO, VSO, VOS, OVS and OSV (e.g., English has SVO basic word order). Curiously, the world's order-sensitive languages make use of these six possibilities in an uneven fashion. According to a survey of 402 languages [17], the majority of languages are either SOV (44.78%) or SVO (41.79%). VSO (9.20%) is much less frequent but still significant, and very few languages make use of VOS (2.99%), OVS (1.24%) or OSV (0.00%) as their basic word order. Broadly speaking, the basic pattern appears to be (SOV, SVO) > VSO
> (VOS, OVS) > OSV. This non-uniformity is a striking empirical finding that demands some
explanation. Unfortunately, most of the explanations that have been offered are either proximate
explanations that simply shift the question, or else are circular.
One of the most straightforward explanations is that the observed word order frequencies may be the
consequence of genetically encoded biases toward particular orders, as part of the universal grammar
hypothesis; this possibility is considered in [4]. However, this can be only a proximate explanation:
why does our genetic endowment happen to bias us in the particular way that it does? And if there
is nothing special about the observed distribution (if it is not an adaption to the environment),
why have thousands of years of adaption and genetic drift not blurred it into something closer to
uniformity?
A similar objection can be made against the proposal that all languages which are alive today descend from a single common ancestor, and that this proto-language used SOV word order [8], explaining the observation that SOV is the most common word order today. If there is nothing special
about SOV, why has random drift (this time in language evolution, not human genetic evolution)
not more significantly changed the word order distribution from its ancient form? Furthermore, it
is clear that ancient SOV languages must have changed into SVO languages much more frequently than into, say, VOS languages in order to arrive at the current state of affairs. Common descent from
SOV cannot explain this by itself.
Another explanation seeks to derive word order frequencies as a consequence of more fundamental
or general linguistic principles. Three such principles are presented in [17]: the "theme-first principle", "verb-object bonding" and the "animate-first principle". These principles do an excellent job
of explaining the observed word order frequencies; the frequency of each word order is proportional
to the number of the principles which that word order permits to be realized (all three principles are
realized in SOV and SVO, two are realized in VSO, one in VOS and OVS, and none in OSV). However, these principles are primarily motivated by the fact that a large body of cross-linguistic data is
consistent with them. Without a deeper justification, they are, in essence, a useful recharacterization
of the data; to offer them as explanations of patterns in that data is circular. In other words, it is not
clear why these principles work.
In this paper we propose a novel explanation for the observed distribution of word orders across
languages, based on uniform information density (UID). The UID hypothesis [13, 10] suggests that
language producers unconsciously endeavor to keep the rate of information transmission as close to
constant as possible when speaking. We use the term "information" here in its information-theoretic
sense of reduction of entropy (uncertainty) of a random variable (where the random variable is the
underlying meaning of an utterance). Conveying information via speech with a uniform information
density represents an optimal solution to the computational problem of conveying information over
a noisy channel in a short time with low probability of error. A listener's comprehension of an
utterance is made more difficult if a syllable, word or clause which carries a lot of information is lost
due to ambient noise or problems with articulation or perception. The most error resistant strategy is
therefore to convey minimal information with each unit of speech. Unfortunately, this leads to other
problems ? namely, that it will take excessive time to convey any meaningful quantity of information.
The best trade off between time efficiency and error resistance is to spread information content as
equally as possible across units and have each unit carry as much information as it can without
exceeding the threshold for error correctability (the channel capacity). Also, UID minimizes the
difficulty involved in online sentence processing, assuming that the difficulty of processing a speech
unit increases superlinearly with that unit?s surprisal [13].
The UID hypothesis is supported by a range of empirical evidence. It suggests that speakers should
attempt to slow down the rate at which information is conveyed when unexpected, high entropy
content is being discussed, and increase the rate when predictable, low entropy content is being
discussed. This prediction is supported by findings indicating that certain classes of words [1] and
syllables [3] are spoken more slowly in unexpected contexts. In addition, analysis of corpus data
suggests that the entropy of sentences taken out of context is higher for sentences further into a body
of text [7, 12]. Furthermore, the use of both optional contractions (e.g., "you are" vs. "you're") [2] and optional function words in relative clauses (e.g., "how big is the house that you live in?" vs. "how big is the house you live in?") [14, 11] appears to be affected by information density
considerations, with contractions used less often when the relative clause is unexpected.
We propose that the basic word order of a language influences the average uniformity of information
density for sentences in that language, and that a preference for languages that are closer to the UID
ideal can explain some of the structure in the observed distribution over basic word orders. The
layout of the rest of the paper is as follows. In Section 2 we describe the underlying conceptual
model and terminology using a simple illustrative example. In Section 3, we apply this analysis to more realistic event distributions, estimated from child-directed speech and experimental data.
2
Development of hypothesis and illustrative examples
This work is based on a simple probabilistic model of language production. We assume that languages are grounded in a world, consisting of objects (elements of a set O) and actions (which are
binary relations between objects, and elements of a set R, such that if r ∈ R then r ⊆ O × O). An event in the world is a triple consisting of a relation r and two objects o1, o2, and is written (o1, r, o2). Events in the world are generated probabilistically in a sequential fashion, as independent identically distributed draws from a probability distribution P over the set of events O × R × O. We assume
that a language consists of nouns (each of which corresponds to a unique object) and verbs (each of
which corresponds to a unique action). Utterances are generated from events by combining the three
relevant words in one of the six possible orders. Each utterance is therefore three words long (there
are no function words in the model). This defines a probabilistic generative model for three-word
utterances.
To make this idea more concrete, we construct a simple toy world consisting of thirteen objects
and two relations. Five of the objects represent individual people (ALICE, BOB, EVE, MALLORY, TRENT) and the other eight represent items which are either food (APPLE, BREAD, CAKE, RICE) or drink (COFFEE, COLA, JUICE, WATER). The two relations are EAT and DRINK, so that the events in this world represent particular people eating or drinking particular items (e.g., (ALICE, DRINK, COFFEE)). Impossible events (e.g., (COFFEE, DRINK, ALICE)) are given zero probability
in the event distribution P . A diagrammatic representation of all the non-zero probabilities of P is
available in the supplementary material, but the salient features of the example are as follows: each
of the five people eat and drink equally often, and equally as often as each other; nobody drinks
foods or eats drinks; and each person has their own particular idiosyncratic distribution over which
foods they prefer to eat and which drinks they prefer to drink.
What is the link between word order and information density in this toy world? Consider a listener
who learns about events in this toy world by hearing three-word utterances (such as "Alice eats apples" or "Bob drinks coffee"), one word at a time. Until they have heard all three words in the utterance, there will generally remain some degree of uncertainty about what the event is, with the uncertainty decreasing as each word is heard. Formally, the event underlying an utterance is a random variable, and the listener's uncertainty is represented by the entropy of that random variable.
Before any words are spoken, the observer's uncertainty is given by the entropy of the event distribution (which we refer to as the base entropy and denote H_0):

    H_0 = H(P) = Σ_{(o1,r,o2)} −P(o1, r, o2) log(P(o1, r, o2)),        (1)

where the sum is taken over all possible events in the world. After the first word, the observer's
uncertainty about the event is reduced, and now corresponds to the entropy of one of the conditional distributions, P (o1 , o2 |r), P (r, o2 |o1 ) or P (o1 , r|o2 ), depending on whether the first word
corresponds to the action (VSO or VOS word order), the person (SVO or SOV word order) or
the food/drink (OVS or OSV word order). Similarly, after the second word, the uncertainty is the
entropy of one of the conditional distributions P (o2 |o1 , r), P (o1 |r, o2 ) or P (r|o1 , o2 ), depending
again on word order. After the third word the event is uniquely determined and the entropy is zero.
This means that for any particular event, the six different choices of word order each define a different monotonically decreasing sequence of intermediate entropies, with the first point in the sequence
always being H0 and the final point always being zero. Equivalently, the different choices of word
order result in different distributions of the total information content of a sentence amongst its constituent words. We call sequences of entropies (H_0, H_1, H_2, 0) entropy trajectories, and sequences of information (I_1 = H_0 − H_1, I_2 = H_1 − H_2, I_3 = H_2) information profiles. Figure 1 shows the entropy trajectories and corresponding information profiles for the event (ALICE, EAT, APPLE) in
our toy world, for three different word orders. The figure demonstrates the correspondence between
trajectories and profiles, as well as the dependency of both on word order. Note that in the figure we
have normalized entropies and informations, so that H0 = 1.
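These quantities are straightforward to compute for any event distribution. The following sketch (our code; the miniature event distribution is only a stand-in for the toy world) computes entropy trajectories for a given event and word order:

```python
import numpy as np

def entropy(probs):
    probs = np.array([p for p in probs if p > 0.0])
    return float(-np.sum(probs * np.log2(probs)))

def entropy_trajectory(P, event, order):
    """Trajectory (H0, H1, H2, 0) for one event under a word order such as
    'SVO'. P maps (subject, verb, object) triples to probabilities."""
    slot = {'S': 0, 'V': 1, 'O': 2}
    spoken = [slot[c] for c in order]        # triple positions in speech order
    traj = [entropy(P.values())]             # H0: base entropy
    for t in (1, 2):                         # uncertainty after words 1 and 2
        heard = {spoken[i]: event[spoken[i]] for i in range(t)}
        consistent = {e: p for e, p in P.items()
                      if all(e[s] == w for s, w in heard.items())}
        total = sum(consistent.values())
        traj.append(entropy(p / total for p in consistent.values()))
    return traj + [0.0]

P = {('alice', 'eat', 'apple'): 0.30, ('alice', 'eat', 'bread'): 0.20,
     ('bob', 'drink', 'cola'): 0.25, ('bob', 'drink', 'juice'): 0.25}
print(entropy_trajectory(P, ('alice', 'eat', 'apple'), 'SVO'))
print(entropy_trajectory(P, ('alice', 'eat', 'apple'), 'OVS'))
```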
If we make the simplifying assumption that all words are of equal length¹, the UID hypothesis
suggests that the ideal shape of an entropy trajectory is a perfectly straight line from the initial
base entropy to the eventual zero entropy, or, equivalently, that the ideal shape of an information
profile is for each word to convey one third of the total information. Figure 1 demonstrates that
some trajectories are better realizations of this ideal than others. For example, in our toy world the
entropy trajectories for the word orders SOV, OSV and OVS (two of which are pictured in Figure 1)
are perfectly horizontal at various points (equivalently, some words carry zero information) because
1
Obviously this is not true. However, in order for this simplifying assumption to skew our results, the length
of nouns would need to vary systematically depending on the relative frequency with which the nouns were the
subject and orbject of sentences, which is highly unlikely to be the case.
3
Figure 1: The entropy trajectories and corresponding information profiles for the event (A LICE ,
E AT, A PPLE) in our toy world, for three different word orders. Dotted lines indicate the ideal
trajectory and profile according to the UID hypothesis. Observe that word orders in which the
object preceeds the verb have significant ?troughs? in their information profiles, making them far
from ideal. This pattern arises because of the event structure in our toy world; our question is what
word orders are optimal given real-world event structure.
knowledge of the object in this world uniquely determines the verb (since foods are strictly eaten
and drinks are strictly drunk). Thus, any word order that places O before V renders the verb entirely
uninformative, in significant conflict with the UID hypothesis.
To formalize the intuitive notion of distance from the UID ideal we define the UID deviation score D(I) of any given information profile I = (I1, I2, I3). D(I) is given by the formula:

D(I) = (3/4) Σ_{i=1}^{3} | Ii/H0 − 1/3 |.   (2)

It is easy to verify that the UID-ideal information profile, with I1 = I2 = I3, has a deviation score of zero, and the least-ideal profile, in which all information is conveyed by a single word, has a deviation score of 1.
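A short companion sketch (again ours) computes the deviation score of equation (2) from an entropy trajectory, and the probability-weighted mean deviation for a whole world; `trajectory` refers to the function sketched above:

```python
def uid_deviation(H):
    # H = (H0, H1, H2, 0): an entropy trajectory.
    # Information profile: I_i is the entropy reduction at word i.
    H0 = H[0]
    I = [H[i] - H[i + 1] for i in range(3)]
    # Equation (2): zero for a uniform profile, one when a single word
    # carries all of the information.
    return 0.75 * sum(abs(Ii / H0 - 1.0 / 3.0) for Ii in I)

def mean_deviation(P, order, trajectory):
    # Mean deviation score for a world: each event's score weighted by
    # its probability under the event distribution P.
    return sum(p * uid_deviation(trajectory(P, e, order))
               for e, p in P.items() if p > 0)
```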
The UID deviation score allows us, for each event in the model world, to produce both an ordering of the word orders from "most UID-like" to "least UID-like", as well as a quantitative measure of the extent to which each word order approaches uniform information density. We can straightforwardly calculate a mean deviation score for the entire model world, by summing the scores for each individual event and weighting by that event's probability according to the event distribution P. This lets us assess the extent to which each word order is UID-suited to a given world. For our toy world, the ordering of word orders from lowest to highest mean deviation score is: VSO, VOS, SVO, OVS, SOV, OSV.
Of course, our toy world is a highly contrived example, and so there is no reason to expect it to
produce the observed cross-linguistic distribution of word orders. This is because we constructed the
artificial P distribution to be pedagogically useful, not to reflect the real-world distribution of events.
The toy example is intended only as a demonstration of the core idea underlying our hypothesis: that
different choices of word order map the same probabilistic structure of the world (P ) onto different
information profiles. Since these profiles have differing levels of information density uniformity, the
UID hypothesis implies a preference ranking of word orders.
What are the mean deviation scores when the event distribution P more accurately approximates reality? Does the preferred ranking of word orders implied by the UID hypothesis reflect the observed
cross-linguistic distribution of word orders? We investigate these questions in the rest of the paper.
3 Corpus analysis
Our work above implies that a particular word ordering in a language is good to the extent that it produces minimal UID deviation scores for events in the world. Accordingly, it would be ideal to assess the optimality of a particular word ordering with respect to the true distribution over "psychologically meaningful" events in the everyday environment. Although we do not have access to this distribution, we may be able to construct sensible approximations. One option is to assume that spontaneous speech is informative about event probabilities: that the probability with which speakers discuss an event is roughly proportional to the actual frequency or psychological importance of that event. Guided by this assumption, in this section we estimate P on the basis of child-directed
speech corpora in two languages, English and Japanese. We use child-directed speech even though the UID hypothesis applies equally well to adult speakers for two reasons: because child-directed speech is more amenable to the particular analysis we provide (which requires relatively simple sentences), and because children learn their language's basic word order very quickly and accurately [9, 5], suggesting that any aspect of primary linguistic data relevant to word order learning must be present in simple child-directed speech.

As our source of English data, we take the "Adam" transcripts from the Brown corpus [5] in the CHILDES database [15]. From this data we extract all of the child-directed utterances involving a random subset of the singly transitive verbs in the corpus (a total of 544 utterances). The subjects and objects of these utterances define the set O and the verbs define the set R. In our analysis, we treat each utterance as a distinct event, setting the probability of an event in P to be proportional to the number of times the corresponding utterance occurs in the corpus. Thus the event distribution P is a measure of the probability that speakers of the language choose to discuss events (rather than their frequency in the real world). For simplicity, we ignore adjectives, plurality, tense, and so forth: for instance, the utterances "the black cat sat on the mat" and "the cats are sitting on the soft mat" would both be mapped to the same event, (CAT, SIT, MAT). Utterances involving pronouns which were considered likely to refer to a wide range of objects across the corpus (such as "it", "this", etc.) were discarded, while those involving pronouns which in the context of the discourse could be expected to refer to a small set of objects (such as "he" or "she") were retained.
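The construction of P from a corpus is then just normalized counting over the extracted event triples; a minimal sketch (ours) of that step:

```python
from collections import Counter

def estimate_event_distribution(events):
    # events: iterable of (subject, verb, object) triples extracted from
    # child-directed utterances, already stripped of adjectives, number,
    # tense, and so forth.
    counts = Counter(events)
    total = sum(counts.values())
    return {event: n / total for event, n in counts.items()}
```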
Figure 2: Distribution of information across words for the world instantiated from an English corpus.

Figure 3: Distribution of information across words for the world instantiated from a Japanese corpus.

Figure 2 shows the distribution of information amongst words (summarizing all of the model world's information profiles) for all six word orders according to the event distribution P derived from the "Adam" transcripts. The mean deviation scores for the six word orders are (from lowest to highest) VSO (0.38), SVO (0.41), VOS (0.48), SOV (0.64), OSV (0.78), OVS (0.79).

To guard against the possibility that these results are a by-product of the fact that English has basic word order SVO, we repeat the method discussed above using utterances involving singly transitive verbs taken from the "Asato", "Nanami" and "Tomito" transcripts in the MiiPro corpus of the CHILDES database, which is in Japanese (basic order SOV). From these transcripts we retrieve 134 utterances. The distribution of information amongst words for the event distribution derived from the Japanese transcripts is shown in Figure 3. The mean deviation scores are SVO (0.66), VSO (0.71), SOV (0.72), VOS (0.72), OSV (0.82), OVS (0.83). This is not precisely the ranking recovered from the English corpus, but there are clear similarities, which we discuss later.
4 Experiment
In the previous analyses, the event distribution P was estimated on the basis of linguistic input. While this is sensible in many respects, it blurs the distinction between the frequency of events in
the world and the frequency with which speakers choose to discuss those events. In one version of the UID hypothesis, we would expect that word order would be optimal with respect to the latter, "speaker-weighted" frequencies. We refer to this as the "weak" hypothesis since it only requires that a language be "internally" consistent, insofar as the word order is expected to be optimal with respect to the topics spoken about. However, there is also a "strong" version of the hypothesis, which states that the language must also be optimal with respect to the perceived frequencies of events in the external world. To test the strong version of the UID word order hypothesis, it is not valid to rely on corpus analysis. Accordingly, in this section we present the results of an experiment designed to measure people's perceptions regarding which events are most likely.

Table 1: Objects and relations in our experiment's model world. Asterisks denote "actor" status.

Objects: APPLE, BEAR*, BED, BELLY-BUTTON, BLANKET, BUNNY*, CAT*, CHAIR, CHEESE, COOKIE, COW*, CRACKER, CUP, DIAPER, DOOR, DUCK*, EAR, FISH*, FLOWER, FOOT*, HAIR, HAND*, HAT, HORSE*, KEY*, LIGHT, MILK, MOUTH*, NOSE*, OUTSIDE, PERSON*, PIG*, SPOON*, TV, TELEPHONE, TOE*, TOOTH*, TREE, WATER
Relations: BITE, DRINK, EAT, HELP, HUG, KISS, OPEN, READ, SEE, SWING

Table 2: Most and least probable completions of event frames according to the experimentally determined event distribution P.

Event frame | Most probable completion | Least probable completion
(PERSON, EAT, _) | APPLE | DOOR
(CAT, DRINK, _) | MILK | BED
(PERSON, _, CAT) | HELP | EAT
(_, EAT, FLOWER) | COW | TOOTH
Our experiment consists of three parts. In the first part we identify the objects O and relations R for the model world based on the first words learned by English-speaking children, on the assumption that those words would reflect the objects and relations that are highly salient. The MacArthur Communicative Development Inventory [6] provides a list of those words, along with norms for when they are learned. We identified all of the words that were either singly-transitive verbs or nouns that were potential subjects or objects for these verbs, yielding 324 nouns and 81 verbs. The only transformation we made to this list was to replace all nouns that referred to specific people (e.g., "Mommy" or "Grandpa") with a single noun "Person". In order to limit the total number of possible events to a number tractable for parts two and three of the experiment, we then identified the 40 objects and 10 relations^2 uttered by the highest percentage of children below the age of 16 months; these comprise the sets O and R. The objects and relations are shown in Table 1.
The 40 objects and 10 relations in our world define a total of 16,000 events, but the overwhelming majority of the events in the world are physically impossible (e.g., (TELEVISION, DRINK, CAT)) and thus should receive a probability of 0. The goal of the second part of the experiment was to identify these impossible events. The first step was to identify the subset of objects capable of acting as actors, indicated with asterisks in Table 1. We set the probability of events whose subjects were non-actors to zero, leaving 6,800 events. To identify which of these events were still impossible, we had two participants^3 judge the possibility or impossibility of each, obtaining two judgements for each event. When both judges agreed that an event was impossible, its probability was set to zero; if they disagreed, we solicited a third judgement and set the event probability to zero if the majority agreed that it was impossible. At the end of this process, a total of 2,536 events remained. Subsequent analysis revealed that many participants had interpreted the noun OUTSIDE as an adverb in events such as (BEAR, EAT, OUTSIDE), leading to events which should properly have been considered impossible being classed as possible; we therefore set all events involving the noun OUTSIDE which did not involve the verb SEE to also be impossible. This reduced the number of events to 2,352.

^2 The ratio of 4 objects for every 1 relation was chosen to reflect the proportion of each reported in [6].
^3 This experiment involved 11,839 binary decisions in the second part and 35,280 binary choices in the third part. In order to collect such a large quantity of data in a reasonable time period, we used Amazon.com's "Mechanical Turk" web application to distribute the judgement tasks to a large international pool of participants, who completed the tasks using their web browsers in exchange for small payments of cash or Amazon.com store credit. A total of 8,956 participants contributed in total, presumably but not verifiably representing a broad range of nationalities, ages, levels of education, etc.

Figure 4: Distribution of information across words for the world instantiated from the experimentally produced event distribution.
In the final part of the experiment, we derived a probability distribution over the remaining, possible events using the responses of participants to a large number of judgement tasks. In each task, participants were presented with a pair of events and asked to indicate which of the two events they considered most probable. Full details of this part of the experiment are available in the supplementary material. Table 2 shows the most and least probable completions of several event frames according to the distribution P produced by our experiment. The completions are in line with common sense, although some of the least probable completions are in fact physically impossible (e.g. (CAT, DRINK, BED)), suggesting that the filtering in part two was not quite perfect.
We now analyse the P distribution we have estimated. The distribution of information among words is shown in Figure 4 and the mean deviation scores are VSO (0.17), SVO (0.18), VOS (0.20), SOV (0.23), OVS (0.23), OSV (0.24).
5 Discussion
On the basis of two corpora of child-directed speech, in different languages, and an experiment,
we have derived three different event distributions which are assumed to represent the important
features of the probabilistic structure of the physical world. From these different distributions we
derive three different preferential rankings of word orders according to the UID hypothesis. From
the English corpus, we get VSO > SVO > VOS > SOV > OSV > OVS; from the Japanese corpus,
we get SVO > VSO > SOV = VOS > OSV > OVS; from the experiment, we get VSO > SVO
> VOS > SOV = OVS > OSV. While these three rankings are not in perfect agreement, there is some degree of common structure. All three rankings are compatible with the partial ranking (SVO, VSO) > (SOV, VOS) > (OVS, OSV). How does this compare with the empirically observed ranking (SOV, SVO) > VSO > (VOS, OVS) > OSV?
The strongest empirical regularity regarding word order frequency (that object-first word orders are extremely rare) coincides with our most robust finding: object-first word orders lead to the least uniform information density in all three of our estimated event distributions. These orders together account for less than 2% of the world's word order-sensitive languages, and in all our models have deviation scores that are notably greater than the deviation scores of the other word orders. What is the reason for this effect? As the profiles in Figures 2, 3 and 4 indicate, object-first word orders deviate from uniformity because the first word (the object) carries a disproportionate amount of information. This seems to occur because many objects are predictive of very few subjects or verbs. For instance, hearing the object word "water" implies only a few possibilities for verbs (e.g., "drink"), which in turn restricts the subjects (e.g. to living things). By contrast, hearing the verb "drink" implies many possibilities for objects (e.g., "water", "coffee", "cola", "juice", etc.).
There are further points of agreement between the rankings produced by our analyses and the empirical data. All three of our estimated event distributions lead to word order rankings in which VSO
is ranked more highly than VOS, which is in agreement with the data. In fact, in all of our rankings,
SVO and VSO occupy the two highest positions (though their relative position varies), consistent
with the fact that these word orders occupy the second and third highest positions in the empirical
7
ranking respectively, and are two of the only three word orders which appear with any appreciable
frequency.
The greatest apparent discrepancy between the rankings produced by our analyses and the empirical
data is the fact that SOV word order, which occurs frequently in real languages, appears to be only
moderately compatible with the UID hypothesis. One possible explanation for this is that some other
factor besides UID-compatibility has influenced the distribution of word orders, and this factor may
favour SOV sufficiently to lift it to the top or equal-top place in a combined ranking. Another
possibility is to combine the idea we saw earlier of common descent from SOV with the idea that
word order change away from SOV is influenced by the UID hypothesis. This explanation could
also lift SOV word order to a higher position in the word order ranking.
To what extent are our rankings consistent with the theme-first principle (TFP), verb-object bonding (VOB) and the animate-first principle (AFP) of [17], which perfectly explain the empirical ranking? The three orders that permit the greatest realization of the TFP and AFP principles are SOV, SVO, and VSO. We note that two of these orders, SVO and VSO, are consistently ranked highest in our results, and the third, SOV, is typically not too far behind. In fact, with the event distribution derived from the Japanese corpus, SOV is in equal third place with VOS. This suggests that perhaps the UID word order hypothesis is unable to provide a complete explanation of all of the word order rankings, but is able to provide a sensible justification for the TFP and/or AFP.
A full consideration of the effects of word order on information density should not limit itself only to the considerations made in this paper, and so our results here must be considered only preliminary. For instance, we have given no consideration to sentences involving intransitive verbs (SV sentences), sentences without an explicit subject (VO sentences), or sentences involving ditransitive verbs (SVO1O2 sentences). A word order optimal for one of these sentence classes may not be optimal for others, so that the question of how to meaningfully combine the results of separate analyses becomes a central challenge in such an extended study. Furthermore, a number of other word order parameters beyond basic word order may have a significant effect on information density, such as whether a language uses prepositions or postpositions, or the relative position of nouns and adjectives or nouns and relative clauses. For instance, consider the order of nouns and adjectives. The utterance "I ate the..." can be completed by any edible object, but "I ate the red..." only by those objects which are both edible and red. Thus, adjectives which precede unexpected nouns can be used to "smooth out" what might otherwise be sudden spikes in information density. Adjectives which come after nouns cannot do this. Several correlations and rules are known to exist between various word order parameters, and it is possible that these effects may be able to be explained in terms of information density.
On the whole, while the word order rankings recovered from our analyses do not perfectly match the
empirically observed ranking, they are in much better agreement with observation than one would
expect if a preference for UID had played no role whatsoever. Furthermore, the particular pattern
of what our rankings do and do not explain, and the ways our two rankings differ, are consistent
with a weaker hypothesis that UID may be able to provide a principled cognitive explanation for the
theme-first and/or animate-first principles of earlier work. It is possible that the discrepancies which
do exist between our results and the empirical distribution could be explained by a combination of
more and richer data and consideration of additional word order parameters. It is also the case that even if information-theoretic concerns have exerted a significant influence on language evolution, there is no reason to expect them to have been the only such influence: genetic and social factors as well as additional cognitive constraints may have played some role as well, so that the UID hypothesis alone need not explain all the observed regularity. Regardless, we have shown that information-theoretic principles can explain several aspects of the empirical distribution of word orders, and most robustly explains the most pronounced of these aspects: the nearly complete lack of object-first languages. Moreover, they do so on independently justified, general cognitive principles, and as such represent a significant advance in our understanding of word order.
6 Acknowledgements
DJN was supported by an Australian Research Fellowship (ARC grant DP-0773794). Kirsty Maurits
assisted significantly in the translation of utterances from the Japanese transcripts.
References
[1] Alan Bell, Daniel Jurafsky, Eric Fosler-Lussier, Cynthia Girand, Michelle Gregory, and Daniel Gildea. Effects of disfluencies, predictability, and utterance position on word form variation in English conversation. Journal of the Acoustical Society of America, 113(2), 2003.
[2] Austin F. Frank and T. Florian Jaeger. Speaking rationally: Uniform information density as an optimal strategy for language production. In Proceedings of the 30th Annual Meeting of the Cognitive Science Society, pages 933-938, 2008.
[3] M. Aylett and A. Turk. The smooth signal redundancy hypothesis: A functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Language and Speech, 47:31-56, 2004.
[4] Ted Briscoe. Grammatical acquisition: Inductive bias and coevolution of language and the language acquisition device. Language, 76(2):245-296, 2000.
[5] R. Brown. A First Language. Harvard University Press, Cambridge, MA, 1973.
[6] Larry Fenson, Philip S. Dale, J. Steven Reznick, Elizabeth Bates, Donna J. Thal, and Stephen J. Pethick. Variability in early communicative development. Monographs of the Society for Research in Child Development, 59, 1994.
[7] D. Genzel and E. Charniak. Entropy rate constancy in text. In Proceedings of ACL, 2002.
[8] Talmy Givón. On Understanding Grammar. Academic Press, New York, NY, 1979.
[9] K. Hirsh-Pasek and R. Golinkoff. The Origins of Grammar: Evidence from Early Language Comprehension. MIT Press, Cambridge, MA, 1996.
[10] T. F. Jaeger. Redundancy and syntactic reduction in spontaneous speech. Unpublished doctoral dissertation, Stanford University, 2006.
[11] T. Florian Jaeger. Redundancy and reduction: Speakers manage syntactic information density. Cognitive Psychology, 61:23-62, 2010.
[12] F. Keller. The entropy rate principle as a predictor of processing effort: An evaluation against eye-tracking data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 317-324, 2004.
[13] R. Levy. Probabilistic Models of Word Order and Syntactic Discontinuity. PhD thesis, Stanford University, 2005.
[14] R. Levy and T. F. Jaeger. Speakers optimize information density through syntactic reduction. In Advances in Neural Information Processing Systems, pages 849-856, 2007.
[15] B. MacWhinney. The CHILDES Project: Tools for Analyzing Talk. Lawrence Erlbaum Associates, Mahwah, NJ, 3rd edition, 2000.
[16] B. Miller, P. Hemmer, M. Steyvers, and M. D. Lee. The wisdom of crowds in rank ordering problems. In A. Howes, D. Peebles, and R. Cooper, editors, 9th International Conference on Cognitive Modeling, 2009.
[17] Russel S. Tomlin. Basic Word Order: Functional Principles. Croom Helm, 1986.
Linear Complementarity for Regularized Policy
Evaluation and Improvement
Jeff Johns, Christopher Painter-Wakefield, Ronald Parr
Department of Computer Science
Duke University
Durham, NC 27708
{johns, paint007, parr}@cs.duke.edu
Abstract
Recent work in reinforcement learning has emphasized the power of L1 regularization to perform feature selection and prevent overfitting. We propose formulating the L1 regularized linear fixed point problem as a linear complementarity problem (LCP). This formulation offers several advantages over the LARS-inspired
formulation, LARS-TD. The LCP formulation allows the use of efficient off-the-shelf solvers, leads to a new uniqueness result, and can be initialized with starting
points from similar problems (warm starts). We demonstrate that warm starts, as
well as the efficiency of LCP solvers, can speed up policy iteration. Moreover,
warm starts permit a form of modified policy iteration that can be used to approximate a "greedy" homotopy path, a generalization of the LARS-TD homotopy path
that combines policy evaluation and optimization.
1 Introduction
L1 regularization has become an important tool over the last decade with a wide variety of machine learning applications. In the context of linear regression, its use helps prevent overfitting and
enforces sparsity in the problem's solution. Recent work has demonstrated how L1 regularization
can be applied to the value function approximation problem in Markov decision processes (MDPs).
Kolter and Ng [1] included L1 regularization within the least-squares temporal difference learning
[2] algorithm as LARS-TD, while Petrik et al. [3] adapted an approximate linear programming algorithm. In both cases, L1 regularization automates the important task of selecting relevant features,
thereby easing the design choices made by a practitioner.
LARS-TD provides a homotopy method for finding the L1 regularized linear fixed point formulated
by Kolter and Ng. We reformulate the L1 regularized linear fixed point as a linear complementarity
problem (LCP). This formulation offers several advantages. It allows us to draw upon the rich theory
of LCPs and optimized solvers to provide strong theoretical guarantees and fast performance. In
addition, we can take advantage of the "warm start" capability of LCP solvers to produce algorithms
that are better suited to the sequential nature of policy improvement than LARS-TD, which must
start from scratch for each new policy.
2 Background
First, we introduce MDPs and linear value function approximation. We then review L1 regularization and feature selection for regression problems. Finally, we introduce LCPs. We defer discussion
of L1 regularization and feature selection for reinforcement learning (RL) until section 3.
2.1 MDP and Value Function Approximation Framework
We aim to discover optimal, or near-optimal, policies for Markov decision processes (MDPs) defined by the quintuple M = (S, A, P, R, γ). Given a state s ∈ S, the probability of a transition to a state s′ ∈ S when action a ∈ A is taken is given by P(s′|s, a). The reward function is a mapping from states to real numbers R : S → R. A policy π for M is a mapping from states to actions π : s ↦ a and the transition matrix induced by π is denoted P^π. Future rewards are discounted by γ ∈ [0, 1). The value function at state s for policy π is the expected total γ-discounted reward for following π from s. In matrix-vector form, this is written:

V^π = T^π V^π = R + γ P^π V^π,

where T^π is the Bellman operator for policy π and V^π is the fixed point of this operator. An optimal policy, π*, maximizes state values, has value function V*, and is the fixed point of the T* operator:

T* V(s) = R(s) + γ max_{a ∈ A} Σ_{s′ ∈ S} P(s′|s, a) V(s′).

Of the many algorithms that exist for finding π*, policy iteration is most relevant to the presentation herein. For any policy π_j, policy iteration computes V^{π_j}, then determines π_{j+1} as the "greedy" policy with respect to V^{π_j}:

π_{j+1}(s) = argmax_{a ∈ A} [R(s) + γ Σ_{s′ ∈ S} P(s′|s, a) V^{π_j}(s′)].

This is repeated until some convergence condition is met. For an exact representation of each V^{π_j}, the algorithm will converge to an optimal policy and the unique, optimal value function V*.
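For the exact tabular case, this loop can be written directly; the following is a standard sketch (not code from this paper):

```python
import numpy as np

def policy_iteration(P, R, gamma):
    # P: (A, S, S) array with P[a, s, s'] = P(s'|s, a); R: length-S rewards.
    A, S, _ = P.shape
    pi = np.zeros(S, dtype=int)
    while True:
        # Policy evaluation: solve V = R + gamma * P_pi V exactly.
        P_pi = P[pi, np.arange(S), :]
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, R)
        # Greedy improvement: Q[a, s] = R(s) + gamma * sum_s' P(s'|s,a) V(s').
        Q = R[None, :] + gamma * (P @ V)
        pi_new = Q.argmax(axis=0)
        if np.array_equal(pi_new, pi):
            return pi, V
        pi = pi_new
```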
The value function, transition model, and reward function are often too large to permit an exact representation. In such cases, an approximation architecture is used for the value function. A common choice is V̂ = Φw, where w is a vector of k scalar weights and Φ stores a set of k features in an n × k matrix with one row per state. Since n is often intractably large, Φ can be thought of as populated by k linearly independent basis functions, φ_1 ... φ_k, implicitly defining the columns of Φ.

For the purposes of estimating w, it is common to replace Φ with Φ̂, which samples rows of Φ, though for conciseness of presentation we will use Φ for both, since algorithms for estimating w are essentially identical if Φ̂ is substituted for Φ. Typical linear function approximation algorithms [2] solve for the w which is a fixed point:

Φw = Π(R + γ Φ′_π w) = Π T^π Φw,

where Π is the L2 projection into the span of Φ and Φ′_π is P^π Φ in the explicit case and composed of sampled next features in the sampled case. Likewise, we overload T^π for the sampled case.
2.2 L1 Regularization and Feature Selection in Regression
In regression, the L1 regularized least squares problem is defined as:

w = argmin_{x ∈ R^k} (1/2) ‖Φx − y‖²₂ + β ‖x‖₁,   (1)

where y ∈ R^n is the target function and β ∈ R≥0 is a regularization parameter. This penalized regression problem is equivalent to the Lasso [4], which minimizes the squared residual subject to a constraint on ‖x‖₁. The use of the L1 norm in the objective function prevents overfitting, but also serves a secondary purpose of promoting sparse solutions (i.e., coefficients w containing many 0s). Therefore, we can think of L1 regularization as performing feature selection. The Lasso's objective function is convex, ensuring the existence of a global (though not necessarily unique) minimum.
Even though the optimal solution to the Lasso can be computed in a fairly straightforward manner using convex programming, this approach is not very efficient for large problems. This is a motivating factor for the least angle regression (LARS) algorithm [5], which can be thought of as a homotopy method for solving the Lasso for all nonnegative values of β. We do not repeat the details of the algorithm here, but point out that this is easier than it might sound at first because the homotopy path in β-space is piecewise linear (with finitely many segments). Furthermore, there exists a closed form solution for moving from one piecewise linear segment to the next segment. An important benefit of LARS is that it provides solutions for all values of β in a single run of the algorithm. Cross-validation can then be performed to select an appropriate value.
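For illustration, scikit-learn's lars_path computes this entire path in one call; the toy data below are our own:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 20))
y = Phi[:, :3] @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(100)

# alphas are the breakpoints of the piecewise linear path; coefs[:, j] is
# the Lasso solution at regularization level alphas[j].
alphas, active, coefs = lars_path(Phi, y, method="lasso")
```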
2.3 LCP and BLCP
Given a square matrix M and a vector q, a linear complementarity problem (LCP) seeks vectors w ≥ 0 and z ≥ 0 with w^T z = 0 and

w = q + M z.

The problem is thus parameterized by LCP(q, M). Even though LCPs may appear to be simple feasibility problems, the framework is rich enough to express any convex quadratic program.

The bounded linear complementarity problem (BLCP) [6] includes box constraints on z. The BLCP computes w and z where w = q + M z and each variable z_i meets one of the following conditions:

z_i = u_i  ⟹  w_i ≤ 0,   (2a)
z_i = l_i  ⟹  w_i ≥ 0,   (2b)
l_i < z_i < u_i  ⟹  w_i = 0,   (2c)

with bounds −∞ ≤ l_i < u_i ≤ ∞. The parameterization is written BLCP(q, M, l, u). Notice that an LCP is a special case of a BLCP with l_i = 0 and u_i = ∞, ∀i. Like the LCP, the BLCP has a unique solution when M is a P-matrix^1 and there exist algorithms which are guaranteed to find this solution [6, 7]. When the lower and upper bounds on the BLCP are finite, the BLCP can in fact be formulated as an equivalent LCP of twice the dimensionality of the original problem. A full derivation of this equivalence is shown in the appendix (supplementary materials).
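Conditions (2a)-(2c) are straightforward to verify mechanically; the checker below is our own sketch (a verification utility, not a solver):

```python
import numpy as np

def is_blcp_solution(q, M, l, u, z, tol=1e-8):
    # Returns True if z satisfies the BLCP(q, M, l, u) conditions:
    # w = q + Mz, and each z_i is either at a bound with w_i of the
    # matching sign, or strictly interior with w_i = 0.
    w = q + M @ z
    for zi, wi, li, ui in zip(z, w, l, u):
        if abs(zi - ui) <= tol:            # (2a): z_i = u_i  =>  w_i <= 0
            ok = wi <= tol
        elif abs(zi - li) <= tol:          # (2b): z_i = l_i  =>  w_i >= 0
            ok = wi >= -tol
        else:                              # (2c): interior   =>  w_i = 0
            ok = (li < zi < ui) and abs(wi) <= tol
        if not ok:
            return False
    return True
```

Setting l = 0 and u = np.inf recovers an ordinary LCP check.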
There are many algorithms for solving (B)LCPs. Since our approach is not tied to a particular algorithm, we review some general properties of (B)LCP solvers. Optimized solvers can take advantage of sparsity in z. A zero entry in z effectively cancels out a column in M. If M is large, efficient solvers can avoid using M directly, instead using a smaller M′ that is induced by the nonzero entries of z. The columns of M′ can be thought of as the "active" columns and the procedure of swapping columns in and out of M′ can be thought of as a pivoting operation, analogous to pivots in the simplex algorithm. Another important property of some (B)LCP algorithms is their ability to start from an initial guess at the solution (i.e., a "warm start"). If the initial guess is close to a solution, this can significantly reduce the solver's runtime.
Recently, Kim and Park [8] derived a connection between the BLCP and the Karush-Kuhn-Tucker (KKT) conditions for LARS. In particular, they noted the solution to the minimization problem in equation (1) has the form:

x = (Φ^T Φ)⁻¹ Φ^T y + (Φ^T Φ)⁻¹ (−c),

where the terms play the roles of w, q, M, and z respectively in the BLCP, and the vector −c follows the constraints in equation (2) with l_i = −β and u_i = β. Although we describe the equivalence between the BLCP and LARS optimality conditions using M ≡ (Φ^T Φ)⁻¹, the inverse can take place inside the BLCP algorithm and this operation is feasible and efficient as it is only done for the active columns of Φ. Kim and Park [8] used a block pivoting algorithm, originally introduced by Júdice and Pires [6], for solving the Lasso. Their experiments show the block pivoting algorithm is significantly faster than both LARS and Feature Sign Search [9].
3 Previous Work
Recent work has emphasized feature selection as an important problem in reinforcement learning [10, 11]. Farahmand et al. [12] consider L2 regularized RL. An L1 regularized Bellman residual minimization algorithm was proposed by Loth et al. [13]^2. Johns and Mahadevan [14] investigate the combination of least squares temporal difference learning (LSTD) [2] with different variants of the matching pursuit algorithm [15, 16]. Petrik et al. [3] consider L1 regularization in the context of approximate linear programming. Their approach offers some strong guarantees, but is not well-suited to noisy, sampled data.

^1 A P-matrix is a matrix for which all principal minors are positive.
^2 Loth et al. claim to adapt LSTD to L1 regularization, but in fact describe a Bellman residual minimization algorithm and not a fixed point calculation.
The work most directly related to our own is that of Kolter and Ng [1]. They propose augmenting the LSTD algorithm with an L1 regularization penalty. This results in the following L1 regularized linear fixed point (L1TD) problem:

w = argmin_{x ∈ R^k} (1/2) ‖Φx − (R + γ Φ′_π w)‖²₂ + β ‖x‖₁.   (3)
Kolter and Ng derive a set of necessary and sufficient conditions characterizing the above fixed point^3 in terms of β, w, and a vector c of correlations between the features and the Bellman residual T^π V̂ − V̂. More specifically, the correlation c_i associated with feature φ_i is given by:

c_i = φ_i^T (T^π V̂ − V̂) = φ_i^T (R + γ Φ′_π w − Φw).   (4)

Introducing the notation I to denote the set of indices of active features in the model (i.e., I = {i : w_i ≠ 0}), the fixed point optimality conditions can be summarized as follows:

C1. All features in the active set share the same absolute correlation, β: ∀i ∈ I, |c_i| = β.
C2. Inactive features have less absolute correlation than active features: ∀i ∉ I, |c_i| < β.
C3. Active features have correlations and weights agreeing in sign: ∀i ∈ I, sgn(c_i) = sgn(w_i).
Kolter and Ng show that it is possible to find the fixed point using an iterative procedure adapted from LARS. Their algorithm, LARS-TD, computes a sequence of fixed points, each of which satisfies the optimality conditions above for some intermediate L1 parameter β̄ ≥ β. Successive solutions decrease β̄ and are computed in closed form by determining the point at which a feature must be added or removed in order to further decrease β̄ without violating one of the fixed point requirements. The algorithm (as applied to action-value function approximation) is a special case of the algorithm presented in the appendix (see Fig. 2). Kolter and Ng prove that if Φ^T(Φ − γ Φ′_π) is a P-matrix, then for any β ≥ 0, LARS-TD will find a solution to equation (3).

LARS-TD inherits many of the benefits and limitations of LARS. The fact that it traces an entire homotopy path can be quite helpful because it does not require committing to a particular value of β. On the other hand, the incremental nature of LARS may not be the most efficient solution for any single value of the regularization parameter, as shown by Lee et al. [9] and Kim and Park [8].
It is natural to employ LARS-TD in an iterative manner within the least squares policy iteration (LSPI) algorithm [17], as Kolter and Ng did. In this usage, however, many of the benefits of LARS are lost. When a new policy is selected in the policy iteration loop, LARS-TD must discard its solution from the previous policy and start an entirely new homotopy path, making the value of the homotopy path in this context not entirely clear. One might cross-validate a choice of regularization parameter by measuring the performance of the final policy, but this requires guessing a value of β for all policies and then running LARS-TD up to this value for each policy. If a new value of β is tried, all of the work done for the previous value must be discarded.
4 The L1 Regularized Fixed Point as an LCP

We show that the optimality conditions for the L1TD fixed point correspond to the solution of a (B)LCP. This reformulation allows for (1) new algorithms to compute the fixed point using (B)LCP solvers, and (2) a new guarantee on the uniqueness of a fixed point.
The L1 regularized linear fixed point is described by a vector of correlations c as defined in equation (4). We introduce the following variables:

A = Φ^T (Φ − γ Φ′_π),   b = Φ^T R,

that allow equation (4) to be simplified as c = b − Aw. Assuming A is a P-matrix, A is invertible^4 [18] and we can write:

w = A⁻¹ b + A⁻¹ (−c),

where, in the BLCP notation above, w plays the role of w, A⁻¹b of q, A⁻¹ of M, and −c of z. Consider a solution (w and z) to the equation above where z is bounded as in equation (2) with l = −β and u = β to specify a BLCP. It is easy to verify that coefficients w satisfying this BLCP achieve the L1TD optimality conditions as detailed in section 3. Thus, any appropriate solver for the BLCP(A⁻¹b, A⁻¹, −β, β) can be thought of as a linear complementarity approach to solving for the L1TD fixed point. We refer to this class of solvers as LC-TD algorithms and parameterize them as LC-TD(Φ, Φ′_π, R, γ, β).

^3 For fixed w, the RHS of equation (3) is a convex optimization problem; a sufficient condition for optimality of some vector x* is that the zero vector is in the subdifferential of the RHS at x*. The fixed point conditions follow from the equality between the LHS and RHS.
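To make the LC-TD interface concrete, the sketch below is our own naive stand-in for a dedicated (B)LCP solver: it repeatedly solves a Lasso with the Bellman backup R + γ Φ′_π w as a moving target, so that a fixed point of the iteration satisfies equation (3); convergence of this outer loop is not guaranteed in general:

```python
import numpy as np
from sklearn.linear_model import Lasso

def lc_td_naive(Phi, PhiNext, R, gamma, beta, iters=50, tol=1e-8):
    # A real LC-TD solver would pass A = Phi.T @ (Phi - gamma * PhiNext)
    # and b = Phi.T @ R to a warm-startable (B)LCP routine instead.
    n = Phi.shape[0]
    # sklearn's Lasso minimizes (1/(2n))||Xw - y||^2 + alpha ||w||_1,
    # so alpha = beta / n matches the scaling of equation (3).
    lasso = Lasso(alpha=beta / n, fit_intercept=False, max_iter=10000)
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        y = R + gamma * (PhiNext @ w)      # Bellman backup at the current w
        w_new = lasso.fit(Phi, y).coef_.copy()
        if np.linalg.norm(w_new - w) < tol:
            break
        w = w_new
    return w
```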
Proposition 1 If A is a P-matrix, then for any R, the L1 regularized linear fixed point exists, is unique, and will be found by a basic-set BLCP algorithm solving BLCP(A⁻¹b, A⁻¹, −β, β).

This proposition follows immediately from some basic BLCP results. We note that if A is a P-matrix, so is A⁻¹ [18], that BLCPs for P-matrices have a unique solution for any q ([7], Chp. 3), and that the basic-set algorithm of Júdice and Pires [19] is guaranteed to find a solution to any BLCP with a P-matrix. This strengthens the theorem by Kolter and Ng [1], which guaranteed only that the LARS-TD algorithm would converge to a solution when A is a P-matrix.

This connection to the LCP literature has practical benefits as well as theoretical ones. Decoupling the problem from the solver allows a variety of algorithms to be exploited. For example, the ability of many solvers to use a warm start during initialization offers a significant computational advantage over LARS-TD (which always begins with a null solution). In the experimental section of this paper, we demonstrate that the ability to use warm starts during policy iteration can significantly improve computational efficiency. We also find that (B)LCP solvers can be more robust than LARS-TD, an issue we address further in the appendix.
5 Modified Policy Iteration using LARS-TD and LC-TD
As mentioned in section 3, the advantages of LARS-TD as a homotopy method are less clear when it is used in a policy iteration loop since the homotopy path is traced only for specific policies. It is possible to incorporate greedy policy improvements into the LARS-TD loop, leading to a homotopy path for greedy policies. The greedy L1 regularized fixed point equation is:

w = argmin_{x ∈ R^k} (1/2) ‖Φx − max_π (R + γ Φ′_π w)‖²₂ + β ‖x‖₁.   (5)
We propose a modification to LARS-TD called LARQ which, along with conditions C1-C3 in section 3, maintains an additional invariant:

C4. The current policy π is greedy with respect to the current solution.

It turns out that we can change policies and avoid violating the LARS-TD invariants if we make policy changes at points where applying the Bellman operator yields the same value for both the old policy (π) and the new policy (π′): T^π V̂ = T^{π′} V̂. The LARS-TD invariants all depend on the correlation of features with the residual T^π V̂ − V̂ of the current solution. When the above equation is satisfied, the residual is equal for both policies. Thus, we can change policies at such points without violating any of the LARS-TD invariants. Due to space limitations, we defer a full presentation of the LARQ algorithm to the appendix.
When run to completion, LARQ provides a set of action-values that are the greedy fixed point for all settings of β. In principle, this is more flexible than LARS-TD with policy iteration because it produces these results in a single run of the algorithm. In practice, LARQ suffers two limitations.

^4 Even when A is not invertible, we can still use a BLCP solver as long as the principal submatrix of A associated with the active features is invertible. As with LARS-TD, the inverse only occurs for this principal submatrix. In fact, we discuss in the appendix how one need never explicitly compute A. Alternatively, we can convert the BLCP to an LCP (appendix A.1) thereby avoiding A⁻¹ in the parameterization of the problem.
The first is that it can be slow. LARS-TD enumerates every point at which the active set of features
might change, a calculation that must be redone every time the active set changes. LARQ must
do this as well, but it must also enumerate all points at which the greedy policy can change. For k
features and n samples, LARS-TD must check O(k) points, but LARQ must check O(k + n) points.
Even though LARS-TD will run multiple times within a policy iteration loop, the number of such
iterations will typically be far fewer than the number of training data points. In practice, we have
observed that LARQ runs several times slower than LARS-TD with policy iteration.
A second limitation of LARQ is that it can get "stuck." This occurs when the greedy policy for a particular β is not well defined. In such cases, the algorithm attempts to switch to a new policy
immediately following a policy change. This problem is not unique to LARQ. Looping is possible
with most approximate policy iteration algorithms. What makes it particularly troublesome for
LARQ is that there are few satisfying ways of addressing this issue without sacrificing the invariants.
To address these limitations, we present a compromise between LARQ and LARS-TD with policy iteration. The algorithm, LC-MPI, is presented as Algorithm 1. It avoids the cost of continually checking for policy changes by updating the policy only at a fixed set of values, β^(1) ... β^(m). Note that the β values are in decreasing order with β^(1) set to the maximum value (i.e., the point such that w^(1) is the zero vector). At each β^(j), the algorithm uses a policy iteration loop to (1) determine the current policy (greedy with respect to parameters w̃^(j)), and (2) compute an approximate value function Φw^(j) using LC-TD. The policy iteration loop terminates when w^(j) ≈ w̃^(j) or some predefined number of iterations is exceeded. This use of LC-TD within a policy iteration loop will typically be quite fast because we can use the current feature set as a warm start. The warm start is indicated in Algorithm 1 by supp(w̃^(j)), where the function supp determines the support, or active elements, in w̃^(j); many (B)LCP solvers can use this information for initialization.
Once the policy iteration loop terminates for point β^(j), LC-MPI simply begins at the next point β^(j+1) by initializing the weights with the previous solution, w̃^(j+1) ← w^(j). This was found to be a very effective technique. As an alternative, we tested initializing w̃^(j+1) with the result of running LARS-TD with the greedy policy implicit in w^(j) from the point (β^(j), w^(j)) to β^(j+1). This initialization method performed worse experimentally than the simple approach described above.

We can view LC-MPI as approximating LARQ's homotopy path since the two algorithms agree for any β^(j) reachable by LARQ. However, LC-MPI is more efficient and avoids the problem of getting stuck. By compromising between the greedy updates of LARQ and the pure policy evaluation methods of LARS-TD and LC-TD, LC-MPI can be thought of as a form of modified policy iteration [20]. The following table summarizes the properties of the algorithms described in this paper.
Property | LARS-TD Policy Iteration | LC-TD Policy Iteration | LARQ | LC-MPI
Warm start for each new β | N | N | Y | Y
Warm start for each new policy | N | Y | Y | Y
Greedy policy homotopy path | N | N | Y | Approximate
Robust to policy cycles | Y | Y | N | Y

6 Experiments
We performed two types of experiments to highlight the potential benefits of (B)LCP algorithms. First, we used both LARS-TD and LC-TD within policy iteration. These experiments, which were run using a single value of the L1 regularization parameter, show the benefit of warm starts for LC-TD. The second set of experiments demonstrates the benefit of using the LC-MPI algorithm. A single run of LC-MPI results in greedy policies for multiple values of β, allowing the use of cross-validation to pick the best policy. We show this is significantly more efficient than running policy iteration with either LARS-TD or LC-TD multiple times for different values of β. We discuss the details of the specific LCP solver we used in the appendix.
Both types of experiments were conducted on the 20-state chain [17] and mountain car [21] domains, the same problems tested by Kolter and Ng [1]. The chain MDP consists of two stochastic actions, left and right, a reward of one at each end of the chain, and γ = 0.9. One thousand samples were generated using 100 episodes, each consisting of 10 random steps. For features, we used 1000 Gaussian random noise features along with five equally spaced radial basis functions (RBFs) and a constant function.
Algorithm 1 LC-MPI

Inputs:
    {s_i, a_i, r_i, s′_i}_{i=1}^{n}, state transition and reward samples
    φ : S × A → R^k, state-action features
    γ ∈ [0, 1), discount factor
    {β^(j)}_{j=1}^{m}, where β^(1) = max_l |Σ_{i=1}^{n} φ_l(s_i, a_i) r_i|,
        β^(j) < β^(j−1) for j ∈ {2, ..., m}, and β^(m) ≥ 0
    ε ∈ R+ and T ∈ N, termination conditions for policy iteration

Initialization:
    Φ ← [φ(s_1, a_1) ... φ(s_n, a_n)]^T,   R ← [r_1 ... r_n]^T,   w^(1) ← 0

for j = 2 to m do
    // Initialize with the previous solution
    w̃^(j) ← w^(j−1)
    // Policy iteration loop
    Loop:
        // Select greedy actions and form Φ′
        ∀i: a′_i ← argmax_a φ(s′_i, a)^T w̃^(j)
        Φ′ ← [φ(s′_1, a′_1) ... φ(s′_n, a′_n)]^T
        // Solve the LC-TD problem using a (B)LCP solver with a warm start
        w^(j) ← LC-TD(Φ, Φ′, R, γ, β^(j)) with warm start supp(w̃^(j))
        // Check for termination
        if (‖w^(j) − w̃^(j)‖₂ ≤ ε) or (# iterations ≥ T) then break loop
        else w̃^(j) ← w^(j)

Return {w^(j)}_{j=1}^{m}
by building up momentum. The domain is continuous, two dimensional, and has three actions. We
used ? = 0.99 and 155 radial basis functions (apportioned as a two dimensional grid of 1, 2, 3, 4, 5,
6, and 8 RBFs) and one constant function for features. Samples were generated using 75 episodes
where each episode started in a random start state, took random actions, and lasted at most 20 steps.
6.1
Policy Iteration
To compare LARS-TD and LC-TD when employed within policy iteration, we recorded the number
of steps used during each round of policy iteration, where a step corresponds to a change in the active
feature set. The computational complexity per step of each algorithm is similar; therefore, we used
the average number of steps per policy as a metric for comparing the algorithms. Policy iteration
was run either until the solution converged or 15 rounds were exceeded. This process was repeated
10 times for 11 different values of ?. We present the results from these experiments in the first two
columns of Table 1. The two algorithms performed similarly for the chain MDP, but LC-TD used
significantly fewer steps for the mountain car MDP. Figure 1 shows plots for the number of steps
used for each round of policy iteration for a single (typical) trial. Notice the declining trend for
LC-TD; this is due to the warm starts requiring fewer steps to find a solution. The plot for the chain
MDP shows that LC-TD uses many more steps in the first round of policy iteration than does LARSTD. Lastly, in the trials shown in Figure 1, policy iteration using LC-TD converged in six iterations
whereas it did not converge at all when using LARS-TD. This was due to LARS-TD producing
solutions that violate the L1 TD optimality conditions. We discuss this in detail in appendix A.5.
6.2
LC-MPI
When LARS-TD and LC-TD are used as subroutines within policy iteration, the process ends at a
single value of the L1 regularization parameter ?. The policy iteration loop must be rerun to consider
different values of ?. In this section, we show how much computation can be saved by running
LC-MPI once (to produce m greedy policies, each at a different value of ?) versus running policy
iteration m separate times. The third column in Table 1 shows the average number of algorithm steps
per policy for LC-MPI. As expected, there is a significant reduction in complexity by using LC-MPI
for both domains. In the appendix, we give a more detailed example of how cross-validation can be
7
300
250
200
LARS?TD
200
Number of Steps
Number of Steps
250
LC?TD
150
100
100
LARS?TD
50
50
0
0
150
5
10
Round of Policy Iteration
LC?TD
0
0
15
(a) Chain
5
10
Round of Policy Iteration
15
(b) Mountain car
Figure 1: Number of steps used by algorithms LARS-TD and LC-TD during each round of policy
iteration for a typical trial. For LC-TD, note the decrease in steps due to warm starts.
Domain
Chain
Mountain car
LARS-TD, PI
73 ? 13
214 ? 33
LC-TD, PI
77 ? 11
116 ? 22
LC-MPI
24 ? 11
21 ? 5
Table 1: Average number of algorithm steps per policy.
used to select a good value of the regularization parameter. We also offer some additional comments
on the robustness of the LARS-TD algorithm.
7
Conclusions
In this paper, we proposed formulating the L1 regularized linear fixed point problem as a linear
complementarity problem. We showed the LCP formulation leads to a stronger theoretical guarantee
in terms of the solution?s uniqueness than was previously shown. Furthermore, we demonstrated that
the ?warm start? ability of LCP solvers can accelerate the computation of the L1 TD fixed point when
initialized with the support set of a related problem. This was found to be particularly effective for
policy iteration problems when the set of active features does not change significantly from one
policy to the next.
We proposed the LARQ algorithm as an alternative to LARS-TD. The difference between these
algorithms is that LARQ incorporates greedy policy improvements inside the homotopy path. The
advantage of this ?greedy? homotopy path is that it provides a set of action-values that are a greedy
fixed point for all settings of the L1 regularization parameter. However, this additional flexibility
comes with increased computational complexity. As a compromise between LARS-TD and LARQ,
we proposed the LC-MPI algorithm which only maintains the LARQ invariants at a fixed set of
values. The key to making LC-MPI efficient is the use of warm starts by using an LCP algorithm.
There are several directions for future work. An interesting question is whether there is a natural
way to incorporate policy improvement directly within the LCP formulation. Another concern for
L1 TD algorithms is a better characterization of the conditions under which solutions exist and can
be found efficiently. In previous work, Kolter and Ng [1] indicated the P-matrix property can always
hold provided enough L2 regularization is added to the problem. While this is possible, it also
decreases the sparsity of the solution; therefore, it would be useful to find other techniques for
guaranteeing convergence while maintaining sparsity.
Acknowledgments
This work was supported by the National Science Foundation (NSF) under Grant #0937060 to the
Computing Research Association for the CIFellows Project, NSF Grant IIS-0713435, and DARPA
CSSG HR0011-06-1-0027. Any opinions, findings, and conclusions or recommendations expressed
in this material are those of the authors and do not necessarily reflect the views of the National
Science Foundation or the Computing Research Association.
8
References
[1] J. Kolter and A. Ng. Regularization and feature selection in least-squares temporal difference
learning. In Proc. ICML, pages 521?528, 2009.
[2] S. Bradtke and A. Barto. Linear least-squares algorithms for temporal difference learning.
Machine Learning, 22(1-3):33?57, 1996.
[3] M. Petrik, G. Taylor, R. Parr, and S. Zilberstein. Feature selection using regularization in
approximate linear programs for Markov decision processes. In To appear in Proc. ICML,
2010.
[4] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical
Society. Series B (Methodological), 58(1):267?288, 1996.
[5] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Annals of
Statistics, 32(2):407?451, 2004.
[6] J. J?udice and F. Pires. A block principal pivoting algorithm for large-scale strictly monotone
linear complementarity problems. Computers and Operations Research, 21(5):587?596, 1994.
[7] K. Murty. Linear Complementarity, Linear and Nonlinear Programming. Heldermann Verlag,
1988.
[8] J. Kim and H. Park. Fast active-set-type algorithms for L1 -regularized linear regression. In
Proc. AISTAT, pages 397?404, 2010.
[9] H. Lee, A. Battle, R. Raina, and A. Ng. Efficient sparse coding algorithms. In Advances in
Neural Information Processing Systems 19, pages 801?808, 2007.
[10] S. Mahadevan and M. Maggioni. Proto-value functions: A Laplacian framework for learning
representation and control in Markov decision processes. JMLR, 8:2169?2231, 2007.
[11] R. Parr, L. Li, G. Taylor, C. Painter-Wakefield, and M. Littman. An analysis of linear models,
linear value-function approximation, and feature selection for reinforcement learning. In Proc.
ICML, 2008.
[12] A. Farahmand, M. Ghavamzadeh, C. Szepesv?ari, and S. Mannor. Regularized fitted Q-iteration
for planning in continuous-space Markovian decision problems. In Proc. ACC. IEEE Press,
2009.
[13] M. Loth, M. Davy, and P. Preux. Sparse temporal difference learning using LASSO. In IEEE
International Symposium on Approximate Dynamic Programming and Reinforcement Learning, 2007.
[14] J. Johns and S. Mahadevan. Sparse approximate policy evaluation using graph-based basis
functions. Technical Report UM-CS-2009-041, University of Massachusetts Amherst, Department of Computer Science, 2009.
[15] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397?3415, 1993.
[16] Y. Pati, R. Rezaiifar, and P. Krishnaprasad. Orthogonal matching pursuit: Recursive function
approximation with applications to wavelet decomposition. In Proceedings of the 27th Annual
Asilomar Conference on Signals, Systems, and Computers, volume 1, pages 40?44, 1993.
[17] M. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning
Research, 4:1107?1149, 2003.
[18] S. Lee and H. Seol. A survey on the matrix completion problem. Trends in Mathematics,
4(1):38?43, 2001.
[19] J. J?udice and F. Pires. Basic-set algorithm for a generalized linear complementarity problem.
Journal of Optimization Theory and Applications, 74(3):391?411, 1992.
[20] M. Puterman and M. Shin. Modified policy iteration algorithms for discounted Markov decision problems. Management Science, 24(11), 1978.
[21] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
9
| 4086 |@word trial:3 norm:1 stronger:1 termination:2 seek:1 tried:1 decomposition:1 pick:1 thereby:2 reduction:1 initial:2 series:1 selecting:1 current:5 comparing:1 si:2 must:10 written:2 john:4 ronald:1 plot:2 update:1 greedy:19 selected:1 guess:2 fewer:3 parameterization:2 provides:4 matrix1:1 redone:1 characterization:1 successive:1 mannor:1 zhang:1 five:1 along:2 c2:1 become:1 symposium:1 farahmand:2 prove:1 consists:1 combine:1 inside:2 manner:2 introduce:3 expected:2 planning:1 bellman:5 inspired:1 discounted:3 decreasing:1 td:74 solver:19 provided:1 begin:2 discover:1 moreover:1 estimating:2 maximizes:1 bounded:2 notation:1 null:1 what:1 mountain:5 minimizes:1 maxa:1 finding:3 guarantee:4 temporal:5 every:2 ti:2 runtime:1 um:1 demonstrates:1 control:1 grant:2 appear:2 producing:1 continually:1 positive:1 sutton:1 troublesome:1 meet:1 path:12 might:3 easing:1 twice:1 initialization:4 equivalence:2 unique:6 practical:1 enforces:1 acknowledgment:1 lost:1 block:3 practice:2 recursive:1 procedure:2 shin:1 thought:6 significantly:6 projection:1 matching:3 chp:1 radial:2 murty:1 davy:1 get:1 close:1 selection:10 operator:4 context:3 applying:1 equivalent:2 demonstrated:2 straightforward:1 starting:1 convex:4 survey:1 immediately:2 pure:1 maggioni:1 analogous:1 annals:1 target:1 mallat:1 exact:2 duke:2 programming:5 us:2 complementarity:10 element:1 trend:2 satisfying:2 strengthens:1 particularly:2 updating:1 observed:1 initializing:2 parameterize:1 thousand:1 cycle:1 episode:3 apportioned:1 decrease:4 removed:1 mentioned:1 ui:3 complexity:3 reward:6 littman:1 automates:1 dynamic:1 ghavamzadeh:1 depend:1 solving:5 segment:3 compromise:2 petrik:3 upon:1 efficiency:2 basis:4 accelerate:1 darpa:1 derivation:1 fast:3 describe:2 committing:1 effective:2 quite:2 supplementary:1 solve:2 ability:4 statistic:1 think:1 noisy:1 final:1 advantage:7 sequence:1 took:1 propose:3 relevant:2 loop:12 flexibility:1 validate:1 aistat:1 getting:1 crossvalidation:1 convergence:2 requirement:1 r1:1 produce:3 incremental:1 guaranteeing:1 help:1 derive:1 completion:2 augmenting:1 finitely:1 minor:1 strong:2 c:2 come:1 met:1 kuhn:1 direction:1 saved:1 compromising:1 lars:54 stochastic:1 sgn:2 opinion:1 material:2 require:1 generalization:1 karush:1 homotopy:15 pmatrix:1 proposition:2 strictly:1 hold:1 mapping:2 claim:1 parr:5 rezaiifar:1 dictionary:1 theshelf:1 purpose:2 uniqueness:3 proc:5 tool:1 minimization:3 mit:1 gaussian:1 always:2 aim:1 modified:4 avoid:2 shrinkage:1 barto:2 zilberstein:1 derived:1 inherits:1 improvement:5 methodological:1 check:3 lasted:1 kim:4 helpful:1 entire:1 typically:2 subroutine:1 rerun:1 arg:5 issue:2 flexible:1 krishnaprasad:1 denoted:1 special:2 fairly:1 initialize:1 equal:1 once:2 never:1 ng:12 identical:1 park:4 cancel:1 icml:3 future:2 simplex:1 report:1 piecewise:2 employ:1 few:1 composed:1 national:2 loth:3 consisting:1 attempt:1 investigate:1 evaluation:4 swapping:1 chain:7 predefined:1 necessary:1 lh:1 orthogonal:1 old:1 taylor:2 initialized:2 sacrificing:1 theoretical:3 fitted:1 increased:1 column:8 markovian:1 measuring:1 cost:1 introducing:1 addressing:1 entry:2 conducted:1 too:1 motivating:1 underpowered:1 aw:1 international:1 amherst:1 lee:3 off:1 invertible:2 squared:1 reflect:1 satisfied:1 management:1 recorded:1 containing:1 worse:1 leading:1 return:1 li:5 supp:3 potential:1 summarized:1 coding:1 includes:1 coefficient:2 kolter:11 explicitly:1 performed:4 view:2 break:1 closed:2 start:22 maintains:2 capability:1 defer:2 rbfs:2 painter:2 square:8 likewise:1 
efficiently:1 correspond:1 yield:1 spaced:1 cifellows:1 drive:1 converged:2 acc:1 suffers:1 frequency:1 tucker:1 associated:2 conciseness:1 sampled:4 massachusetts:1 enumerates:1 car:6 dimensionality:1 efron:1 exceeded:2 originally:1 violating:3 follow:1 specify:1 formulation:6 done:2 though:5 box:1 furthermore:2 wakefield:2 implicit:1 lastly:1 until:3 correlation:7 hand:1 christopher:1 nonlinear:1 indicated:2 mdp:6 usage:1 building:1 verify:1 requiring:1 regularization:22 equality:1 nonzero:1 puterman:1 round:7 during:4 noted:1 mpi:16 generalized:1 hill:1 demonstrate:2 lcp:27 l1:36 bradtke:1 recently:1 ari:1 pivoting:4 common:2 lagoudakis:1 rl:2 volume:1 association:2 refer:1 significant:2 declining:1 ai:2 grid:1 populated:1 similarly:1 mathematics:1 reachable:1 moving:1 own:1 recent:3 showed:1 discard:1 store:1 verlag:1 exploited:1 minimum:1 additional:3 employed:1 converge:3 determine:1 signal:2 ii:1 full:2 sound:1 multiple:3 violate:1 technical:1 faster:1 adapt:1 calculation:2 offer:5 cross:3 long:1 equally:1 a1:2 feasibility:1 ensuring:1 laplacian:1 variant:1 regression:9 basic:4 essentially:1 metric:1 iteration:46 c1:2 addition:1 background:1 subdifferential:1 whereas:1 szepesv:1 else:1 comment:1 induced:2 subject:1 acheive:1 incorporates:1 practitioner:1 near:1 intermediate:1 mahadevan:3 enough:2 easy:1 variety:2 switch:1 zi:3 architecture:1 lasso:7 hastie:1 reduce:1 pivot:1 inactive:1 whether:1 six:1 penalty:1 larstd:1 action:10 enumerate:1 useful:1 clear:2 detailed:2 discount:1 udice:4 exist:3 nsf:2 notice:2 sign:2 per:5 tibshirani:2 write:1 express:1 key:1 reformulation:1 traced:1 prevent:2 graph:1 monotone:1 convert:1 run:8 angle:2 parameterized:1 inverse:2 place:1 draw:1 decision:6 appendix:9 summarizes:1 submatrix:2 entirely:2 bound:2 guaranteed:3 quadratic:1 nonnegative:1 annual:1 adapted:2 constraint:3 looping:1 ri:2 speed:1 span:1 formulating:2 min:3 quintuple:1 performing:1 optimality:7 department:2 project:1 combination:1 battle:1 smaller:1 terminates:2 agreeing:1 wi:5 making:2 modification:1 s1:2 invariant:6 taken:1 asilomar:1 equation:10 agree:1 previously:1 turn:1 discus:3 serf:1 end:2 pursuit:3 operation:3 permit:2 promoting:1 appropriate:2 alternative:2 robustness:1 slower:1 existence:1 original:1 running:5 maintaining:1 approximating:1 society:1 lspi:1 objective:2 added:2 question:1 occurs:2 guessing:1 separate:1 assuming:1 index:1 pati:1 reformulate:1 nc:1 trace:1 design:1 policy:90 perform:1 allowing:1 upper:1 markov:5 discarded:1 finite:1 defining:1 rn:2 introduced:1 c3:2 optimized:2 connection:2 c4:1 herein:1 pires:4 address:2 hr0011:1 sparsity:4 program:2 preux:1 max:3 royal:1 power:1 natural:2 warm:18 regularized:15 residual:6 raina:1 improve:1 mdps:3 started:1 sn:2 review:2 literature:1 l2:3 checking:1 determining:1 highlight:1 interesting:1 limitation:5 versus:1 validation:2 foundation:2 sufficient:2 principle:1 share:1 pi:2 row:2 penalized:1 repeat:1 last:1 supported:1 intractably:1 allow:1 johnstone:1 wide:1 characterizing:1 absolute:2 sparse:4 benefit:7 maxl:1 transition:4 avoids:2 rich:2 computes:3 stuck:2 made:1 reinforcement:6 author:1 simplified:1 far:1 transaction:1 approximate:9 implicitly:1 global:1 overfitting:3 active:13 kkt:1 alternatively:1 search:1 iterative:2 continuous:2 decade:1 table:4 scratch:1 nature:2 robust:2 decoupling:1 cssg:1 necessarily:2 domain:4 substituted:1 did:2 linearly:1 rh:3 noise:1 repeated:2 fig:1 slow:1 lc:39 momentum:1 explicit:1 tied:1 jmlr:1 third:1 wavelet:1 rk:3 theorem:1 specific:2 emphasized:2 concern:1 exists:2 
sequential:1 effectively:1 ci:5 heldermann:1 durham:1 easier:1 suited:2 simply:1 prevents:1 expressed:1 scalar:1 recommendation:1 lstd:3 corresponds:1 determines:2 satisfies:1 goal:1 formulated:2 presentation:3 jeff:1 replace:1 feasible:1 change:10 experimentally:1 included:1 typical:3 specifically:1 wt:1 principal:4 total:1 called:1 secondary:1 experimental:1 select:3 support:2 overload:1 incorporate:2 proto:1 tested:2 avoiding:1 |
3,409 | 4,087 | Extended Bayesian Information Criteria for Gaussian
Graphical Models
Mathias Drton
University of Chicago
[email protected]
Rina Foygel
University of Chicago
[email protected]
Abstract
Gaussian graphical models with sparsity in the inverse covariance matrix are of
significant interest in many modern applications. For the problem of recovering
the graphical structure, information criteria provide useful optimization objectives
for algorithms searching through sets of graphs or for selection of tuning parameters of other methods such as the graphical lasso, which is a likelihood penalization technique. In this paper we establish the consistency of an extended Bayesian
information criterion for Gaussian graphical models in a scenario where both the
number of variables p and the sample size n grow. Compared to earlier work on
the regression case, our treatment allows for growth in the number of non-zero parameters in the true model, which is necessary in order to cover connected graphs.
We demonstrate the performance of this criterion on simulated data when used in
conjunction with the graphical lasso, and verify that the criterion indeed performs
better than either cross-validation or the ordinary Bayesian information criterion
when p and the number of non-zero parameters q both scale with n.
1
Introduction
This paper is concerned with the problem of model selection (or structure learning) in Gaussian
graphical modelling. A Gaussian graphical model for a random vector X = (X1 , . . . , Xp ) is determined by a graph G on p nodes. The model comprises all multivariate normal distributions
N (?, ??1 ) whose inverse covariance matrix satisfies that ?jk = 0 when {j, k} is not an edge in G.
For background on these models, including a discussion of the conditional independence interpretation of the graph, we refer the reader to [1].
In many applications, in particular in the analysis of gene expression data, inference of the graph G is
of significant interest. Information criteria provide an important tool for this problem. They provide
the objective to be minimized in (heuristic) searches over the space of graphs and are sometimes
used to select tuning parameters in other methods such as the graphical lasso of [2]. In this work
we study an extended Bayesian information criterion (BIC) for Gaussian graphical models. Given a
sample of n independent and identically distributed observations, this criterion takes the form
?
BIC? (E) = ?2ln (?(E))
+ |E| log n + 4|E|? log p,
(1)
?
where E is the edge set of a candidate graph and ln (?(E))
denotes the maximized log-likelihood
function of the associated model. (In this context an edge set comprises unordered pairs {j, k} of
distinct elements in {1, . . . , p}.) The criterion is indexed by a parameter ? ? [0, 1]; see the Bayesian
interpretation of ? given in [3]. If ? = 0, then the classical BIC of [4] is recovered, which is
well known to lead to (asymptotically) consistent model selection in the setting of fixed number of
variables p and growing sample size n. Consistency is understood to mean selection of the smallest
true graph whose edge set we denote E0 . Positive ? leads to stronger penalization of large graphs
and our main result states that the (asymptotic) consistency of an exhaustive search over a restricted
1
model space may then also hold in a scenario where p grows moderately with n (see the Main
Theorem in Section 2). Our numerical work demonstrates that positive values of ? indeed lead to
improved graph inference when p and n are of comparable size (Section 3).
The choice of the criterion in (1) is in analogy to a similar criterion for regression models that was
first proposed in [5] and theoretically studied in [3, 6]. Our theoretical study employs ideas from
these latter two papers as well as distribution theory available for decomposable graphical models.
As mentioned above, we treat an exhaustive search over a restricted model space that contains all
decomposable models given by an edge set of cardinality |E| ? q. One difference to the regression
treatment of [3, 6] is that we do not fix the dimension bound q nor the dimension |E0 | of the smallest
true model. This is necessary for connected graphs to be covered by our work.
In practice, an exhaustive search is infeasible even for moderate values of p and q. Therefore, we
must choose some method for preselecting a smaller set of models, each of which is then scored
by applying the extended BIC (EBIC). Our simulations show that the combination of EBIC and
graphical lasso gives good results well beyond the realm of the assumptions made in our theoretical
analysis. This combination is consistent in settings where both the lasso and the exhaustive search
are consistent but in light of the good theoretical properties of lasso procedures (see [7]), studying
this particular combination in itself would be an interesting topic for future work.
2
2.1
Consistency of the extended BIC for Gaussian graphical models
Notation and definitions
In the sequel we make no distinction between the edge set E of a graph on p nodes and the associated Gaussian graphical model. Without loss of generality we assume a zero mean vector for all
distributions in the model. We also refer to E as a set of entries in a p ? p matrix, meaning the 2|E|
entries indexed by (j, k) and (k, j) for each {j, k} ? E. We use ? to denote the index pairs (j, j)
for the diagonal entries of the matrix.
Let ?0 be a positive definite matrix supported on ? ? E0 . In other words, the non-zero entries
of ?0 are precisely the diagonal entries as well as the off-diagonal positions indexed by E0 ; note
that a single edge in E0 corresponds to two positions in the matrix due to symmetry. Suppose the
?1
random vectors
P X1 , . .T. , Xn are independent and distributed identically according to N (0, ?0 ).
1
Let S = n i Xi Xi be the sample covariance matrix. The Gaussian log-likelihood function
simplifies to
n
(2)
ln (?) = [log det(?) ? trace(S?)] .
2
We introduce some further notation. First, we define the maximum variance of the individual nodes:
2
?max
= max(??1
0 )jj .
j
Next, we define ?0 = mine?E0 |(?0 )e |, the minimum signal over the edges present in the graph.
(For edge e = {j, k}, let (?0 )e = (?0 )jk = (?0 )kj .) Finally, we write ?max for the maximum
2
eigenvalue of ?0 . Observe that the product ?max
?max is no larger than the condition number of ?0
?1
2
because 1/?min (?0 ) = ?max (?0 ) ? ?max .
2.2
Main result
Suppose that n tends to infinity with the following asymptotic assumptions on data and model:
?
E0 is decomposable, with |E0 | ? q,
?
?
? ? 2 ?max ? C,
?
? max
p = O(n? ), p ? ?,
(3)
1
?
) > 0,
?0 = ? ? (1 ? 4?
?
?
?
? (p + 2q) log p ? ?2max = o(n)
?2
0
Here C, ? > 0 and ? are fixed reals, while the integers p, q, the edge set E0 , the matrix ?0 , and
2
thus the quantities ?max
, ?max and ?0 are implicitly allowed to vary with n. We suppress this latter
dependence on n in the notation. The ?big oh? O(?) and the ?small oh? o(?) are the Landau symbols.
2
Main Theorem. Suppose that conditions (3) hold. Let E be the set of all decomposable models E
with |E| ? q. Then with probability tending to 1 as n ? ?,
E0 = arg min BIC? (E).
E?E
That is, the extended BIC with parameter ? selects the smallest true model E0 when applied to any
subset of E containing E0 .
In order to prove this theorem we use two techniques for comparing likelihoods of different models. Firstly, in Chen and Chen?s work on the GLM case [6], the Taylor approximation to the loglikelihood function is used and we will proceed similarly when comparing the smallest true model
E0 to models E which do not contain E0 . The technique produces a lower bound on the decrease in
likelihood when the true model is replaced by a false model.
Theorem 1. Suppose that conditions (3) hold. Let E1 be the set of models E with E 6? E0 and
|E| ? q. Then with probability tending to 1 as n ? ?,
?
ln (?0 ) ? ln (?(E))
> 2q(log p)(1 + ?0 ) ? E ? E1 .
Secondly, Porteous [8] shows that in the case of two nested models which are both decomposable,
the likelihood ratio (at the maximum likelihood estimates) follows a distribution that can be expressed exactly as a log product of Beta distributions. We will use this to address the comparison
between the model E0 and decomposable models E containing E0 and obtain an upper bound on
the improvement in likelihood when the true model is expanded to a larger decomposable model.
Theorem 2. Suppose that conditions (3) hold. Let E0 be the set of decomposable models E with
E ? E0 and |E| ? q. Then with probability tending to 1 as n ? ?,
?
? 0 )) < 2(1 + ?0 )(|E| ? |E0 |) log p
ln (?(E))
? ln (?(E
?E ? E0 \{E0 }.
Proof of the Main Theorem. With probability tending to 1 as n ? ?, both of the conclusions of
Theorems 1 and 2 hold. We will show that both conclusions holding simultaneously implies the
desired result.
Observe that E ? E0 ? E1 . Choose any E ? E\{E0 }. If E ? E0 , then (by Theorem 2):
?
? 0 ))) + 4(1 + ?0 )(|E| ? |E0 |) log p > 0.
BIC? (E) ? BIC? (E0 ) = ?2(ln (?(E))
? ln (?(E
If instead E ? E1 , then (by Theorem 1, since |E0 | ? q):
?
? 0 ))) + 4(1 + ?0 )(|E| ? |E0 |) log p > 0.
BIC? (E) ? BIC? (E0 ) = ?2(ln (?(E))
? ln (?(E
Therefore, for any E ? E\{E0 }, BIC? (E) > BIC? (E0 ), which yields the desired result.
Some details on the proofs of Theorems 1 and 2 are given in the Appendix in Section 5.
3
Simulations
In this section, we demonstrate that the EBIC with positive ? indeed leads to better model selection
properties in practically relevant settings. We let n grow, set p ? n? for various values of ?, and
apply the EBIC with ? ? {0, 0.5, 1} similarly to the choice made in the regression context by [3]. As
mentioned in the introduction, we first use the graphical lasso of [2] (as implemented in the ?glasso?
package for R) to define a small set of models to consider (details given below). From the selected
set we choose the model with the lowest EBIC. This is repeated for 100 trials for each combination
of values of n, p, ? in each scaling scenario. For each case, the average positive selection rate (PSR)
and false discovery rate (FDR) are computed.
We recall that the graphical lasso places an `1 penalty on the inverse covariance matrix. Given a
penalty ? ? 0, we obtain the estimate
? ? = arg min ?ln (?) + ?k?k1 .
?
?
3
(4)
Figure 1: The chain (top) and the ?double chain? (bottom) on 6 nodes.
(Here we may define k?k1 as the sum of absolute values of all entries, or only of off-diagonal entries; both variants are common). The `1 penalty promotes zeros in the estimated inverse covariance
? ? ; increasing the penalty yields an increase in sparsity. The ?glasso path?, that is, the set
matrix ?
of models recovered over the full range of penalties ? ? [0, ?), gives a small set of models which,
roughly, include the ?best? models at various levels of sparsity. We may therefore apply the EBIC to
this manageably small set of models (without further restriction to decomposable models). Consistency results on the graphical lasso require the penalty ? to satisfy bounds that involve measures of
regularity in the unknown matrix ?0 ; see [7]. Minimizing the EBIC can be viewed as a data-driven
method of tuning ?, one that does not require creation of test data.
While cross-validation does not generally have consistency properties for model selection (see [9]),
it is nevertheless interesting to compare our method to cross-validation. For the considered simulated
data, we start with the set of models from the ?glasso path?, as before, and then perform 100-fold
cross-validation. For each model and each choice of training set and test set, we fit the model to
the training set and then evaluate its performance on each sample in the test set, by measuring error
in predicting each individual node conditional on the other nodes and then taking the sum of the
squared errors. We note that this method is computationally much more intensive than the BIC or
EBIC, because models need to be fitted many more times.
3.1
Design
In our simulations, we examine the EBIC as applied to the case where the graph is a chain with node
j being connected to nodes j ?1, j +1, and to the ?double chain?, where node j is connected to nodes
j ? 2, j ? 1, j + 1, j + 2. Figure 1 shows examples of the two types of graphs, which have on the
order of p and 2p edges, respectively. For both the chain and the double chain, we investigate four
different scaling scenarios, with the exponent ? selected from {0.5, 0.9, 1, 1.1}. In each scenario,
we test n = 100, 200, 400, 800, and define p ? n? with the constant of proportionality chosen such
that p = 10 when n = 100 for better comparability.
In the case of a chain, the true inverse covariance matrix ?0 is tridiagonal with all diagonal entries
(?0 )j,j set equal to 1, and the entries (?0 )j,j+1 = (?0 )j+1,j that are next to the main diagonal
equal to 0.3. For the double chain, ?0 has all diagonal entries equal to 1, the entries next to the main
diagonal are (?0 )j,j+1 = (?0 )j+1,j = 0.2 and the remaining non-zero entries are (?0 )j,j+2 =
2
(?0 )j+2,j = 0.1. In both cases, the choices result in values for ?0 , ?max
and ?max that are bounded
uniformly in the matrix size p.
For each data set generated from N (0, ??1
0 ), we use the ?glasso? package [2] in R to compute the
?glasso path?. We choose 100 penalty values ? which are logarithmically evenly spaced between
?max (the smallest value which will result in a no-edge model) and ?max /100. At each penalty
? ? from (4) and define the model E? based on this estimate?s support. The R
value ?, we compute ?
? ? ). We may
routine also allows us to compute the unpenalized maximum likelihood estimate ?(E
then readily compute the EBIC from (1). There is no guarantee that this procedure will find the
model with the lowest EBIC along the full ?glasso path?, let alone among the space of all possible
models of size ? q. Nonetheless, it serves as a fast way to select a model without any manual tuning.
3.2
Results
Chain graph: The results for the chain graph are displayed in Figure 2. The figure shows the positive
selection rate (PSR) and false discovery rate (FDR) in the four scaling scenarios. We observe that,
for the larger sample sizes, the recovery of the non-zero coefficients is perfect or nearly perfect for all
three values of ?; however, the FDR rate is noticeably better for the positive values of ?, especially
4
for higher scaling exponents ?. Therefore, for moderately large n, the EBIC with ? = 0.5 or ? = 1
performs very well, while the ordinary BIC0 produces a non-trivial amount of false positives. For
100-fold cross-validation, while the PSR is initially slightly higher, the growing FDR demonstrates
the extreme inconsistency of this method in the given setting.
Double chain graph: The results for the double chain graph are displayed in Figure 3. In each
of the four scaling scenarios for this case, we see a noticeable decline in the PSR as ? increases.
Nonetheless, for each value of ?, the PSR increases as n and p grow. Furthermore, the FDR for the
ordinary BIC0 is again noticeably higher than for the positive values of ?, and in the scaling scenarios ? ? 0.9, the FDR for BIC0 is actually increasing as n and p grow, suggesting that asymptotic
consistency may not hold in these cases, as is supported by our theoretical results. 100-fold crossvalidation shows significantly better PSR than the BIC and EBIC methods, but the FDR is again
extremely high and increases quickly as the model grows, which shows the unreliability of crossvalidation in this setting. Similarly to what Chen and Chen [3] conclude for the regression case,
it appears that the EBIC with parameter ? = 0.5 performs well. Although the PSR is necessarily
lower than with ? = 0, the FDR is quite low and decreasing as n and p grow, as desired.
For both types of simulations, the results demonstrate the trade-off inherent in choosing ? in the finite
(non-asymptotic) setting. For low values of ?, we are more likely to obtain a good (high) positive
selection rate. For higher values of ?, we are more likely to obtain a good (low) false discovery
rate. (In the Appendix, this corresponds to assumptions (5) and (6)). However, asymptotically, the
conditions (3) guarantee consistency, meaning that the trade-off becomes irrelevant for large n and
p. In the finite case, ? = 0.5 seems to be a good compromise in simulations, but the question of
determining the best value of ? in general settings is an open question. Nonetheless, this method
offers guaranteed asymptotic consistency for (known) values of ? depending only on n and p.
4
Discussion
We have proposed the use of an extended Bayesian information criterion for multivariate data generated by sparse graphical models. Our main result gives a specific scaling for the number of variables
p, the sample size n, the bound on the number of edges q, and other technical quantities relating to
the true model, which will ensure asymptotic consistency. Our simulation study demonstrates the
the practical potential of the extended BIC, particularly as a way to tune the graphical lasso. The
results show that the extended BIC with positive ? gives strong improvement in false discovery rate
over the classical BIC, and even more so over cross-validation, while showing comparable positive
selection rate for the chain, where all the signals are fairly strong, and noticeably lower, but steadily
increasing, positive selection rate for the double chain with a large number of weaker signals.
5
Appendix
We now sketch proofs of non-asymptotic versions of Theorems 1 and 2, which are formulated as
Theorems 3 and 4. We also give a non-asymptotic formulation of the Main Theorem; see Theorem 5.
In the non-asymptotic approach, we treat all quantities as fixed (e.g. n, p, q, etc.) and state precise
assumptions on those quantities, and then give an explicit lower bound on the probability of the
extended BIC recovering the model E0 exactly. We do this to give an intuition for the magnitude
of the sample size n necessary for a good chance of exact recovery in a given setting but due to the
proof techniques, the resulting implications about sample size are extremely conservative.
5.1
Preliminaries
We begin by stating two lemmas that are used in the proof of the main result, but are also more
generally interesting as tools for precise bounds on Gaussian and chi-square distributions. First, Cai
[10, Lemma 4] proves the following chi-square bound. For any n ? 1, ? > 0,
n
1
P {?2n > n(1 + ?)} ? ? e? 2 (??log(1+?)) .
? ?n
We can give an analagous left-tail upper bound. The proof is similar to Cai?s proof and omitted here.
We will refer to these two bounds together as (CSB).
5
Figure 2: Simulation results when the true graph is a chain.
Lemma 1. For any ? > 0, for n such that n ? 4??2 + 1,
n?1
1
P {?2n < n(1 ? ?)} ? p
e 2 (?+log(1??)) .
? ?(n ? 1)
Second, we give a distributional result about the sample correlation when sampling from a bivariate
normal distribution.
Lemma 2. Suppose (X1 , Y1 ), . . . , (Xn , Yn ) are independent draws from a bivariate normal distribution with zero mean, variances equal to one and covariance ?. Then the following distributional
equivalence holds, where A and B are independent ?2n variables:
n
X
1??
D 1+?
(A ? n) ?
(B ? n).
(Xi Yi ? ?) =
2
2
i=1
Proof. Let A1 , B1 , A2 , B2 , . . . , An , Bn be independent standard normal random variables. Define:
r
r
r
r
n
n
X
X
1+?
1??
1+?
1??
Xi =
Ai +
Bi ; Yi =
Ai ?
Bi ; A =
A2i ; B =
Bi2 .
2
2
2
2
i=1
i=1
Then the variables X1 , Y1 , X2 , Y2 , . . . , Xn , Yn have the desired
joint distribution, and A, B are inP
dependent ?2n variables. The claim follows from writing i Xi Yi in terms of A and B.
6
Figure 3: Simulation results when the true graph is a ?double chain?.
5.2
Non-asymptotic versions of the theorems
2
?max , ? = logn p, and
We assume the following two conditions, where 0 , 1 > 0, C ? ?max
1
?0 = ? ? (1 ? 4? ):
(p + 2q) log p ?2max
1
? 2 ?
n
?0
3200 max{1 + ?0 , 1 + 21 C 2 }
?
p
log log p + log(4 1 + ?0 ) + 1
2( 1 + ?0 ? 1) ?
? 0
2 log p
Theorem 3. Suppose assumption (5) holds. Then with probability at least 1 ?
E 6? E0 with |E| ? q,
?
ln (?0 ) ? ln (?(E))
> 2q(log p)(1 + ?0 ).
(5)
(6)
? 1
p?1 ,
? log p
for all
Proof. We sketch a proof along the lines of the proof of Theorem 2 in [6], using Taylor series
?
centered at the true ?0 to approximate the likelihood at ?(E).
The score and the negative Hessian
of the log-likelihood function in (2) are
d
n
d
n
ln (?) =
??1 ? S ,
Hn (?) = ?
sn (?) = ??1 ? ??1 .
d?
2
d?
2
Here, the symbol ? denotes the Kronecker product of matrices. Note that, while we require ? to be
symmetric positive definite, this is not reflected in the derivatives above. We adopt this convention
for the notational convenience in the sequel.
sn (?) =
7
?
Next, observe that ?(E)
has support on ? ? E0 ? E, and that by definition of ?0 , we have the lower
?
bound |?(E) ? ?0 |F ? ?0 in terms of the Frobenius norm. By concavity of the log-likelihood
function, it suffices to show that the desired inequality holds for all ? with support on ? ? E0 ? E
? on the path from ?0 to ?, we have:
with |? ? ?0 |F = ?0 . By Taylor expansion, for some ?
1
?
ln (?) ? ln (?0 ) = vec(? ? ?0 )T sn (?0 ) ? vec(? ? ?0 )T Hn (?)vec(?
? ?0 ).
2
Next, by (CSB) and Lemma 2, with probability at least 1 ? ?? 1log p e?1 log p , the following bound
holds for all edges e in the complete graph (we omit the details):
4
(sn (?0 ))2e ? 6?max
(2 + 1 )n log p.
Now assume that this bound holds for all edges. Fix some E as above, and fix ? with support on
? ? E0 ? E, with |? ? ?0 | = ?0 . Note that the support has at most (p + 2q) entries. Therefore,
4
|vec(? ? ?0 )T sn (?0 )|2 ? ?02 (p + 2q) ? 6?max
(2 + 1 )n log p.
Furthermore, the eigenvalues of ? are bounded by ?max + ?0 ? 2?max , and so by properties of
? is at least n (2?max )?2 . We conclude that
Kronecker products, the minimum eigenvalue of Hn (?)
2
q
n
1
4
ln (?) ? ln (?0 ) ? ?02 (p + 2q) ? 6?max
(2 + 1 )n log p ? ?02 ? (2?max )?2 .
2
2
Combining this bound with our assumptions above, we obtain the desired result.
Theorem 4. Suppose additionally that assumption (6) holds (in particular, this implies that ? >
p?0
1
1 ? 4?
). Then with probability at least 1 ? 4??1log p 1?p
?0 , for all decomposable models E such
that E ) E0 and |E| ? q,
?
? 0 )) < 2(1 + ?0 )(|E| ? |E0 |) log p.
ln (?(E))
? ln (?(E
?
Proof. First, fix a single such model E, and define m = |E| ? |E0 |. By [8, 11], ln (?(E))
?
? 0 )) is distributed as ? n log (Qm Bi ), where Bi ? Beta( n?ci , 1 ) are independent random
ln (?(E
i=1
2
2
2
variables and the constants c1 , . . . , cm ?
are bounded by 1 less than the maximal clique size of the
graph given by model E, implying ci ? 2q for each i. Also shown in [8] is the stochastic inequality
? log(Bi ) ? n?c1i ?1 ?21 . It follows that, stochastically,
n
1
?
?
?2 .
2
n ? 2q ? 1 m
Finally, combining the assumptions on n, p, q and the (CSB) inequalities, we obtain:
0
m
?
? 0 )) ? 2(1 + ?0 )m log(p)} ? ? 1
e? 2 (4(1+ 2 ) log p) .
P {ln (?(E))
? ln (?(E
4 ? log p
?
? 0 )) ?
ln (?(E))
? ln (?(E
Next, note that the number of models |E| with E ? E0 and |E| ? |E0 | = m is bounded by p2m .
Taking the union bound over all choices of m and all choices of E with that given m, we obtain that
the desired result holds with the desired probability.
We are now ready to give a non-asymptotic version of the Main Theorem. For its proof apply the
union bound to the statements in Theorems 3 and 4, as in the asymptotic proof given in section 2.
Theorem 5. Suppose assumptions (5) and (6) hold. Let E be the set of subsets E of edges between the p nodes, satisfying |E| ? q and representing a decomposable model. Then it holds with
p?0
? 1
probability at least 1 ? 4??1log p 1?p
p?1 that
?0 ?
? log p
E0 = arg min BIC? (E).
E?E
That is, the extended BIC with parameter ? selects the smallest true model.
Finally, we note that translating the above to the asymptotic version of the result is simple. If the
conditions (3) hold, then for sufficiently large n (and thus sufficiently large p), assumptions (5) and
(6) hold. Furthermore, although we may not have the exact equality ? = logn p, we will have
logn p ? ?; this limit will be sufficient for the necessary inequalities to hold for sufficiently large
n. The proofs then follow from the non-asymptotic results.
8
References
[1] Steffen L. Lauritzen. Graphical models, volume 17 of Oxford Statistical Science Series. The
Clarendon Press Oxford University Press, New York, 1996. Oxford Science Publications.
[2] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation
with the graphical lasso. Biostatistics, 9(3):432?441, 2008.
[3] Jiahua Chen and Zehua Chen. Extended Bayesian information criterion for model selection
with large model space. Biometrika, 95:759?771, 2008.
[4] Gideon Schwarz. Estimating the dimension of a model. Ann. Statist., 6(2):461?464, 1978.
[5] Malgorzata Bogdan, Jayanta K. Ghosh, and R. W. Doerge. Modifying the Schwarz Bayesian
information criterion to locate multiple interacting quantitative trait loci. Genetics, 167:989?
999, 2004.
[6] Jiahua Chen and Zehua Chen. Extended BIC for small-n-large-p sparse GLM. Preprint.
[7] Pradeep Ravikumar, Martin J. Wainwright, Garvesh Raskutti, and Bin Yu.
Highdimensional covariance estimation by minimizing `1 -penalized log-determinant divergence.
arXiv:0811.3628, 2008.
[8] B. T. Porteous. Stochastic inequalities relating a class of log-likelihood ratio statistics to their
asymptotic ?2 distribution. Ann. Statist., 17(4):1723?1734, 1989.
[9] Jun Shao. Linear model selection by cross-validation. J. Amer. Statist. Assoc., 88(422):486?
494, 1993.
[10] T. Tony Cai. On block thresholding in wavelet regression: adaptivity, block size, and threshold
level. Statist. Sinica, 12(4):1241?1273, 2002.
[11] P. Svante Eriksen. Tests in covariance selection models. Scand. J. Statist., 23(3):275?284,
1996.
9
| 4087 |@word trial:1 determinant:1 version:4 stronger:1 seems:1 norm:1 proportionality:1 open:1 simulation:8 bn:1 covariance:10 contains:1 series:2 score:1 recovered:2 comparing:2 must:1 readily:1 numerical:1 chicago:2 alone:1 implying:1 selected:2 node:11 firstly:1 along:2 beta:2 prove:1 introduce:1 theoretically:1 indeed:3 roughly:1 nor:1 growing:2 examine:1 steffen:1 chi:2 decreasing:1 landau:1 cardinality:1 increasing:3 becomes:1 begin:1 estimating:1 notation:3 bounded:4 biostatistics:1 lowest:2 what:1 cm:1 ghosh:1 guarantee:2 quantitative:1 growth:1 exactly:2 biometrika:1 demonstrates:3 qm:1 assoc:1 omit:1 unreliability:1 yn:2 positive:14 before:1 understood:1 treat:2 tends:1 limit:1 oxford:3 path:5 studied:1 equivalence:1 range:1 bi:5 practical:1 practice:1 union:2 definite:2 block:2 procedure:2 significantly:1 word:1 inp:1 convenience:1 selection:14 context:2 applying:1 writing:1 restriction:1 decomposable:11 recovery:2 oh:2 searching:1 suppose:9 exact:2 element:1 logarithmically:1 satisfying:1 jk:2 particularly:1 distributional:2 bottom:1 preprint:1 rina:2 connected:4 decrease:1 trade:2 mentioned:2 intuition:1 moderately:2 mine:1 compromise:1 creation:1 shao:1 joint:1 various:2 distinct:1 fast:1 choosing:1 exhaustive:4 whose:2 heuristic:1 larger:3 quite:1 loglikelihood:1 statistic:1 itself:1 eigenvalue:3 cai:3 product:4 maximal:1 jayanta:1 relevant:1 combining:2 frobenius:1 crossvalidation:2 double:8 regularity:1 produce:2 perfect:2 bogdan:1 depending:1 stating:1 lauritzen:1 noticeable:1 strong:2 recovering:2 implemented:1 implies:2 convention:1 modifying:1 stochastic:2 centered:1 translating:1 noticeably:3 bin:1 require:3 fix:4 suffices:1 preliminary:1 secondly:1 hold:18 practically:1 sufficiently:3 considered:1 normal:4 claim:1 vary:1 adopt:1 smallest:6 omitted:1 a2:1 estimation:2 schwarz:2 tool:2 gaussian:10 publication:1 conjunction:1 improvement:2 notational:1 modelling:1 likelihood:13 inference:2 dependent:1 initially:1 selects:2 arg:3 among:1 logn:3 exponent:2 fairly:1 equal:4 sampling:1 yu:1 nearly:1 future:1 minimized:1 inherent:1 employ:1 modern:1 simultaneously:1 divergence:1 individual:2 replaced:1 friedman:1 drton:2 interest:2 investigate:1 extreme:1 pradeep:1 light:1 chain:16 implication:1 edge:16 necessary:4 indexed:3 taylor:3 desired:8 e0:43 theoretical:4 fitted:1 earlier:1 cover:1 measuring:1 ordinary:3 entry:13 subset:2 tridiagonal:1 sequel:2 off:4 together:1 quickly:1 squared:1 again:2 containing:2 choose:4 hn:3 stochastically:1 derivative:1 suggesting:1 potential:1 unordered:1 b2:1 coefficient:1 analagous:1 satisfy:1 start:1 square:2 variance:2 maximized:1 yield:2 spaced:1 bayesian:8 manual:1 trevor:1 definition:2 nonetheless:3 steadily:1 associated:2 proof:15 treatment:2 recall:1 realm:1 routine:1 actually:1 appears:1 clarendon:1 higher:4 follow:1 reflected:1 improved:1 formulation:1 amer:1 generality:1 furthermore:3 correlation:1 jerome:1 sketch:2 grows:2 verify:1 true:13 contain:1 y2:1 equality:1 symmetric:1 criterion:15 complete:1 demonstrate:3 performs:3 meaning:2 common:1 garvesh:1 tending:4 raskutti:1 volume:1 tail:1 interpretation:2 relating:2 trait:1 significant:2 refer:3 vec:4 ai:2 tuning:4 consistency:10 similarly:3 etc:1 multivariate:2 moderate:1 driven:1 irrelevant:1 scenario:8 inequality:5 inconsistency:1 yi:3 minimum:2 signal:3 multiple:1 full:2 technical:1 cross:7 offer:1 preselecting:1 e1:4 ravikumar:1 promotes:1 a1:1 variant:1 regression:6 arxiv:1 sometimes:1 c1:1 background:1 grow:5 integer:1 identically:2 concerned:1 independence:1 bic:22 
fit:1 hastie:1 lasso:11 idea:1 simplifies:1 decline:1 intensive:1 det:1 expression:1 penalty:8 proceed:1 hessian:1 jj:1 york:1 useful:1 generally:2 covered:1 involve:1 tune:1 amount:1 statist:5 p2m:1 estimated:1 tibshirani:1 write:1 four:3 nevertheless:1 threshold:1 graph:23 asymptotically:2 eriksen:1 sum:2 inverse:6 package:2 place:1 reader:1 draw:1 appendix:3 scaling:7 comparable:2 bound:16 guaranteed:1 fold:3 precisely:1 infinity:1 kronecker:2 x2:1 min:4 extremely:2 expanded:1 martin:1 according:1 combination:4 smaller:1 slightly:1 restricted:2 glm:2 ln:27 computationally:1 foygel:1 locus:1 serf:1 studying:1 available:1 apply:3 observe:4 a2i:1 denotes:2 top:1 include:1 porteous:2 remaining:1 graphical:21 ensure:1 tony:1 k1:2 especially:1 establish:1 prof:1 classical:2 objective:2 question:2 quantity:4 dependence:1 diagonal:8 comparability:1 simulated:2 evenly:1 topic:1 trivial:1 index:1 scand:1 ratio:2 minimizing:2 sinica:1 robert:1 statement:1 holding:1 trace:1 negative:1 suppress:1 design:1 fdr:8 unknown:1 perform:1 upper:2 observation:1 finite:2 displayed:2 extended:13 precise:2 y1:2 locate:1 csb:3 interacting:1 pair:2 distinction:1 address:1 beyond:1 below:1 sparsity:3 gideon:1 including:1 max:27 unpenalized:1 wainwright:1 bi2:1 c1i:1 predicting:1 representing:1 ready:1 jun:1 kj:1 sn:5 discovery:4 determining:1 asymptotic:15 loss:1 glasso:6 adaptivity:1 interesting:3 analogy:1 penalization:2 validation:7 sufficient:1 xp:1 consistent:3 thresholding:1 genetics:1 penalized:1 supported:2 infeasible:1 uchicago:2 weaker:1 taking:2 absolute:1 sparse:3 distributed:3 dimension:3 xn:3 concavity:1 made:2 approximate:1 implicitly:1 gene:1 clique:1 b1:1 conclude:2 svante:1 xi:5 search:5 additionally:1 symmetry:1 expansion:1 necessarily:1 main:11 big:1 scored:1 allowed:1 repeated:1 x1:4 zehua:2 position:2 comprises:2 explicit:1 candidate:1 wavelet:1 theorem:21 specific:1 showing:1 symbol:2 bivariate:2 false:6 ci:2 magnitude:1 chen:8 jiahua:2 likely:2 psr:7 expressed:1 corresponds:2 nested:1 satisfies:1 chance:1 conditional:2 viewed:1 formulated:1 ann:2 determined:1 uniformly:1 lemma:5 conservative:1 mathias:1 select:2 highdimensional:1 support:5 latter:2 evaluate:1 |
3,410 | 4,088 | Hashing Hyperplane Queries to Near Points
with Applications to Large-Scale Active Learning
Sudheendra Vijayanarasimhan
Department of Computer Science
University of Texas at Austin
[email protected]
Prateek Jain
Algorithms Research Group
Microsoft Research, Bangalore, India
[email protected]
Kristen Grauman
Department of Computer Science
University of Texas at Austin
[email protected]
Abstract
We consider the problem of retrieving the database points nearest to a given hyperplane query without exhaustively scanning the database. We propose two hashingbased solutions. Our first approach maps the data to two-bit binary keys that
are locality-sensitive for the angle between the hyperplane normal and a database
point. Our second approach embeds the data into a vector space where the Euclidean norm reflects the desired distance between the original points and hyperplane query. Both use hashing to retrieve near points in sub-linear time. Our
first method?s preprocessing stage is more efficient, while the second has stronger
accuracy guarantees. We apply both to pool-based active learning: taking the
current hyperplane classifier as a query, our algorithm identifies those points (approximately) satisfying the well-known minimal distance-to-hyperplane selection
criterion. We empirically demonstrate our methods? tradeoffs, and show that they
make it practical to perform active selection with millions of unlabeled points.
1
Introduction
Efficient similarity search with large databases is central to many applications of interest, such as
example-based learning algorithms, content-based image or audio retrieval, and quantization-based
data compression. Often the search problem is considered in the domain of point data: given a
database of vectors listing some attributes of the data objects, which points are nearest to a novel
query vector? Existing algorithms provide efficient data structures for point-to-point retrieval tasks
with various useful distance functions, producing either exact or approximate near neighbors while
forgoing a brute force scan through all database items, e.g., [1, 2, 3, 4, 5, 6, 7].
By comparison, much less work considers how to efficiently handle instances more complex than
points. In particular, little previous work addresses the hyperplane-to-point search problem: given
a database of points, which are nearest to a novel hyperplane query? This problem is critical to
pool-based active learning, where the goal is to request labels for those points that appear most
informative. The widely used margin-based selection criterion of [8, 9, 10] seeks those points that are
nearest to the current support vector machine?s hyperplane decision boundary, and can substantially
reduce total human annotation effort. However, for large-scale active learning, it is impractical to
exhaustively apply the classifier to all unlabeled points at each round of learning; to exploit massive
unlabeled pools, a fast (sub-linear time) hyperplane search method is needed.
1
To this end, we propose two solutions for approximate hyperplane-to-point search. For each, we
introduce randomized hash functions that offer query times sub-linear in the size of the database, and
provide bounds for the approximation error of the neighbors retrieved. Our first approach devises
a two-bit hash function that is locality-sensitive for the angle between the hyperplane normal and a
database point. Our second approach embeds the inputs such that the Euclidean distance reflects the
hyperplane distance, thereby making them searchable with existing approximate nearest neighbor
algorithms for vector data. While the preprocessing in our first method is more efficient, our second
method has stronger accuracy guarantees.
We demonstrate our algorithms? significant practical impact for large-scale active learning with
SVM classifiers. Our results show that our method helps scale-up active learning for realistic problems with massive unlabeled pools on the order of millions of examples.
2
Related Work
We briefly review related work on approximate similarity search, subspace search methods, and
pool-based active learning.
Approximate near-neighbor search. For low-dimensional points, spatial decomposition and treebased search algorithms can provide the exact neighbors in sub-linear time [1, 2]. While such
methods break down for high-dimensional data, a number of approximate near neighbor methods
have been proposed that work well with high-dimensional inputs. Locality-sensitive hashing (LSH)
methods devise randomized hash functions that map similar points to the same hash buckets, so that
only a subset of the database must be searched after hashing a novel query [3, 4, 5]. A related family
of methods design Hamming space embeddings that can be indexed efficiently (e.g., [11, 12, 6]).
However, in contrast to our approach, all such techniques are intended for vector/point data.
A few researchers have recently examined approximate search tasks involving subspaces. In [13], a
Euclidean embedding is developed such that the norm in the embedding space directly reflects the
principal angle-based distance between the original subspaces. After this mapping, one can apply
existing approximate near-neighbor methods designed for points (e.g., LSH). We provide a related
embedding to find the points nearest to the hyperplane; however, in contrast to [13], we provide LSH
bounds, and our embedding is more compact due to our proposed sampling strategy. Another method
to find the nearest subspace for a point query is given in [14], though it is limited to relatively low2
dimensional data due to its preprocessing time/space requirement of O(N d log N ) and query time
10
of O(d log N ), where N is the number of database points and d is the dimensionality of the data.
Further, unlike [13], that approach is restricted to point queries. Finally, a sub-linear time method to
map a line query to its nearest points is derived in [15]. In contrast to all the above work, we propose
specialized methods for the hyperplane search problem, and show that they handle high-dimensional
data and large databases very efficiently.
Margin-based active learning. Existing active classifier learning methods for pool-based selection
generally scan all database instances before selecting which to have labeled next.1 One well-known
and effective active selection criterion for support vector machines (SVMs) is to choose points that
are nearest to the current separating hyperplane [8, 9, 10]. While simple, this criterion is intuitive,
has theoretical basis in terms of rapidly reducing the version space [8], and thus is widely used
in practice (e.g., [17, 18, 19]). Unfortunately, even for inexpensive selection functions, very large
unlabeled datasets make the cost of exhaustively searching the pool impractical. Researchers have
previously attempted to cope with this issue by clustering or randomly downsampling the pool [19,
20, 21, 22]; however, such strategies provide no guarantees as to the potential loss in active selection
quality. In contrast, when applying our approach for this task, we can consider orders of magnitude
fewer points when making the next active label request, yet guarantee selections within a known
error of the traditional exhaustive pool-based technique.
Other forms of approximate SVM training. To avoid potential confusion, we note that our problem setting differs from both that considered in [23], where computational geometry insights are
combined with the QP formulation for more efficient ?core vector? SVM training, as well as that
considered in [19], where a subset of labeled data points are selected for online LASVM training.
1
We consider only a specific hyperplane criterion in this paper; see [16] for an active learning survey.
2
3
Approach
We consider the following retrieval problem. Given a database D = [x_1, . . . , x_N] of N points in R^d, the goal is to retrieve the points from the database that are closest to a given hyperplane query whose normal is given by w ∈ R^d. We call this the nearest neighbor to a query hyperplane (NNQH) problem. Without loss of generality, we assume that the hyperplane passes through the origin, and that each x_i and w is unit norm. We see in later sections that these assumptions do not affect our solution.

The Euclidean distance of a point x to a given hyperplane h_w parameterized by normal w is:

d(h_w, x) = ‖(x^T w) w‖ = |x^T w|.  (1)

Thus, the goal for the NNQH problem is to identify those points x_i ∈ D that minimize |x_i^T w|. Note that this is in contrast to traditional proximity problems, e.g., nearest or farthest neighbor retrieval, where the goal is to maximize x^T w or −x^T w, respectively. Hence, existing approaches are not directly applicable to this problem.
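Before introducing the hashing schemes, it is useful to fix the exact computation they approximate. The following is a minimal brute-force sketch of NNQH (the function and variable names are our own, not from the paper):

```python
import numpy as np

def nnqh_brute_force(X, w, top=1):
    """Exact NNQH: indices of the database points nearest to the hyperplane.

    X: (N, d) array of unit-norm database points; w: unit-norm hyperplane normal.
    By Eq. (1), the distance of x to the hyperplane through the origin is |x^T w|.
    """
    dists = np.abs(X @ w)            # |x_i^T w| for every database point
    return np.argsort(dists)[:top]   # smallest values = nearest to the hyperplane
```

This exhaustive scan costs O(Nd) per query; the two hashing schemes below are designed to avoid it.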
We formulate two algorithms for NNQH. Our first approach maps the data to binary keys that are
locality-sensitive for the angle between the hyperplane normal and a database point, thereby permitting sub-linear time retrieval with hashing. Our second approach computes a sparse Euclidean
embedding for the query hyperplane that maps the desired search task to one handled well by existing approximate nearest-point methods.
In the following, we first provide necessary background on locality-sensitive hashing (LSH). The
subsequent two sections describe each approach in turn, and Sec. 3.4 reviews their trade-offs. Finally, in Sec. 3.5, we explain how either method can be applied to large-scale active learning.
3.1 Background: Locality-Sensitive Hashing (LSH)
Informally, LSH [3] requires randomized hash functions guaranteeing that the probability of collision of two vectors is inversely proportional to their "distance", where "distance" is defined according to the task at hand. Since similar points are assured (w.h.p.) to fall into the same hash bucket, one need only search those database items with which a novel query collides in the hash table.

Formally, let d(·, ·) be a distance function over items from a set S, and for any item p ∈ S, let B(p, r) denote the set of examples from S within radius r from p.
Definition 3.1. [3] Let h_H denote a random choice of a hash function from the family H. The family H is called (r, r(1 + ε), p1, p2)-sensitive for d(·, ·) when, for any q, p ∈ S,

• if p ∈ B(q, r) then Pr[h_H(q) = h_H(p)] ≥ p1,
• if p ∉ B(q, r(1 + ε)) then Pr[h_H(q) = h_H(p)] ≤ p2.
For a family of functions to be useful, it must satisfy p1 > p2. A k-bit LSH function computes a hash "key" by concatenating the bits returned by a random sampling of H: g(p) = [h_H^(1)(p), h_H^(2)(p), . . . , h_H^(k)(p)]. Note that the probability of collision for close points is thus at least p1^k, while for dissimilar points it is at most p2^k. During a preprocessing stage, all database points are mapped to a series of l hash tables indexed by independently constructed g_1, . . . , g_l, where each g_i is a k-bit function. Then, given a query q, an exhaustive search is carried out only on those examples in the union of the l buckets to which q hashes. These candidates contain the (r, ε)-nearest neighbors (NN) for q, meaning if q has a neighbor within radius r, then with high probability some example within radius r(1 + ε) is found.
In [3] an LSH scheme using projections onto single coordinates is shown to be locality-sensitive for the Hamming distance over vectors. For that hash function, ρ = log p1 / log p2 ≤ 1/(1 + ε), and using l = N^ρ hash tables, a (1 + ε)-approximate solution can be retrieved in time O(N^(1/(1+ε))). Related formulations and LSH functions for other distances have been explored (e.g., [5, 4, 24]). Our contribution is to define two locality-sensitive hash functions for the NNQH problem.
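The preprocessing/query protocol just described can be sketched generically as follows (a minimal illustration assuming each g_i maps an item to a hashable key; the hash-function families themselves are defined in the next two sections):

```python
from collections import defaultdict

def build_tables(points, hash_fns):
    """Index database points into l hash tables, one per k-bit function g_i."""
    tables = [defaultdict(list) for _ in hash_fns]
    for idx, p in enumerate(points):
        for table, g in zip(tables, hash_fns):
            table[g(p)].append(idx)
    return tables

def query_candidates(q, tables, hash_fns):
    """Union of the l buckets to which the query q hashes; only these are searched."""
    cands = set()
    for table, g in zip(tables, hash_fns):
        cands.update(table.get(g(q), []))
    return cands
```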
3.2 Hyperplane Hashing based on Angle Distance (H-Hash)
Recall that we want to retrieve the database vector(s) x for which |w^T x| is minimized. If the vectors are unit norm, then this means that for the "good" (close) database vectors, w and x are almost perpendicular. Let θ_{x,w} denote the angle between x and w. We define the distance d(·, ·) in Definition 3.1 to reflect how far from perpendicular w and x are:

d_θ(x, w) = (θ_{x,w} − π/2)².  (2)
Consider the following two-bit function that maps two input vectors a, b ∈ R^d to {0, 1}²:

h_{u,v}(a, b) = [h_u(a), h_v(b)] = [sign(u^T a), sign(v^T b)],  (3)

where h_u(a) = sign(u^T a) returns 1 if u^T a ≥ 0, and 0 otherwise, and u and v are sampled independently from a standard d-dimensional Gaussian, i.e., u, v ∼ N(0, I).
We define our hyperplane hash (H-Hash) function family H as:

h_H(z) = h_{u,v}(z, z) if z is a database point vector, and h_H(z) = h_{u,v}(z, −z) if z is a query hyperplane vector.
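In code, the family amounts to a two-bit sign hash whose second argument is negated for queries (a minimal sketch; u and v are the Gaussian vectors defined above):

```python
import numpy as np

def h_hash_bit_pair(z, u, v, is_query=False):
    """One draw from H: [sign(u^T a), sign(v^T b)], with (a, b) = (z, -z) for a
    query hyperplane normal and (a, b) = (z, z) for a database point."""
    a, b = (z, -z) if is_query else (z, z)
    return (int(u @ a >= 0), int(v @ b >= 0))
```

Concatenating k such pairs gives the hash keys used in the tables of Sec. 3.1.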
Next, we prove that this family of hash functions is locality-sensitive (Definition 3.1).
Claim 3.2. The family H is (r, r(1 + ε), 1/4 − r/π², 1/4 − r(1 + ε)/π²)-sensitive for the distance d_θ(·, ·), where r, ε > 0.
Proof. Since the vectors u, v used by hash function h_{u,v} are sampled independently, then for a query hyperplane vector w and a database point vector x,

Pr[h_H(w) = h_H(x)] = Pr[h_u(w) = h_u(x) and h_v(−w) = h_v(x)]
                    = Pr[h_u(w) = h_u(x)] · Pr[h_v(−w) = h_v(x)].  (4)
Next, we use the following fact proven in [25]:

Pr[sign(u^T a) = sign(u^T c)] = 1 − θ_{a,c}/π,  (5)

where u is sampled as defined above, and θ_{a,c} denotes the angle between the two vectors a and c.
Using (4) and (5), we get:

Pr[h_H(w) = h_H(x)] = (1 − θ_{x,w}/π) · (θ_{x,w}/π) = 1/4 − (1/π²)(θ_{x,w} − π/2)².
Hence, when (θ_{x,w} − π/2)² ≤ r, Pr[h_H(w) = h_H(x)] ≥ 1/4 − r/π² = p1. Similarly, for any ε > 0 such that (θ_{x,w} − π/2)² ≥ r(1 + ε), Pr[h_H(w) = h_H(x)] ≤ 1/4 − r(1 + ε)/π² = p2.
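The collision probability in the proof is easy to verify empirically (a quick Monte Carlo sanity check; the dimension and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 25, 200_000
w = rng.standard_normal(d); w /= np.linalg.norm(w)
x = rng.standard_normal(d); x /= np.linalg.norm(x)
theta = np.arccos(np.clip(w @ x, -1.0, 1.0))

u = rng.standard_normal((trials, d))
v = rng.standard_normal((trials, d))
collide = ((u @ w >= 0) == (u @ x >= 0)) & ((v @ (-w) >= 0) == (v @ x >= 0))

print(collide.mean())                                # empirical Pr[h_H(w) = h_H(x)]
print(0.25 - (theta - np.pi / 2) ** 2 / np.pi ** 2)  # 1/4 - (theta - pi/2)^2 / pi^2
```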
We note that unlike traditional LSH functions, ours are asymmetric. That is, to hash a database point x we use h_{u,v}(x, x), whereas to hash a query hyperplane w, we use h_{u,v}(w, −w). The purpose of the two-bit hash is to constrain the angle with respect to both w and −w, so that we do not simply retrieve examples for which we know only that x is π/2 or less away from w.
With these functions in hand, we can now form hash keys by concatenating k two-bit pairs from k
hash functions from H, store the database points in the hash tables, and query with a novel hyperplane to retrieve its closest points (see Sec. 3.1).
The approximation guarantees and correctness of this scheme can be obtained by adapting the proof of Theorem 1 in [3] (see supplementary file). In particular, we can show that with high probability, our LSH scheme will return a point within a distance (1 + ε)r, where r = min_i d_θ(x_i, w), in time O(N^ρ), where ρ = log p1 / log p2. As p1 > p2, we have ρ < 1, i.e., the approach takes sub-linear time for all values of r, ε. Furthermore, as p1 = 1/4 − r/π² and p2 = 1/4 − r(1 + ε)/π², ρ can also be bounded in closed form as a function of r and ε. Note that this bound for ρ is dependent on r, and is more efficient for larger values of r. See the supplementary material for more discussion on the bound.
3.3 Embedded Hyperplane Hashing based on Euclidean Distance (EH-Hash)
Our second approach for the NNQH problem relies on a Euclidean embedding for the hyperplane and points. It offers stronger bounds than the above, but at the expense of more preprocessing.

Given a d-dimensional vector a, we compute an embedding inspired by [13] that yields a d²-dimensional vector by vectorizing the corresponding rank-1 matrix aa^T:

V(a) = vec(aa^T) = [a_1², a_1 a_2, . . . , a_1 a_d, a_2², a_2 a_3, . . . , a_d²],  (6)

where a_i denotes the i-th element of a. Assuming a and b to be unit vectors, the Euclidean distance between the embeddings V(a) and −V(b) is given by ‖V(a) − (−V(b))‖² = 2 + 2(a^T b)². Hence, minimizing the distance between the two embeddings is equivalent to minimizing |a^T b|, our intended function.
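The embedding and the distance identity can be checked in a few lines (a sketch using only Eq. (6) and unit-norm inputs):

```python
import numpy as np

def V(a):
    """Eq. (6): vec(a a^T), a d^2-dimensional embedding of the d-vector a."""
    return np.outer(a, a).ravel()

rng = np.random.default_rng(1)
a = rng.standard_normal(6); a /= np.linalg.norm(a)
b = rng.standard_normal(6); b /= np.linalg.norm(b)

lhs = np.linalg.norm(V(a) - (-V(b))) ** 2   # ||V(a) - (-V(b))||^2
rhs = 2 + 2 * (a @ b) ** 2
assert np.isclose(lhs, rhs)
```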
Given this, we define our embedding-hyperplane hash (EH-Hash) function family E as:

h_E(z) = h_u(V(z)) if z is a database point vector, and h_E(z) = h_u(−V(z)) if z is a query hyperplane vector,

where h_u(z) = sign(u^T z) is a one-bit hash function parameterized by u ∼ N(0, I).
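A minimal sketch of one EH-Hash bit, mirroring the asymmetry of H-Hash (u is assumed to be a d²-dimensional Gaussian draw):

```python
import numpy as np

def eh_hash_bit(z, u, is_query=False):
    """sign(u^T V(z)) for a database point, sign(u^T (-V(z))) for a query normal."""
    e = np.outer(z, z).ravel()   # V(z)
    if is_query:
        e = -e
    return int(u @ e >= 0)
```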
Claim 3.3. The family of functions E defined above is (r, r(1 + ε), (1/π) cos⁻¹(sin²(√r)), (1/π) cos⁻¹(sin²(√(r(1 + ε)))))-sensitive for d_θ(·, ·), where r, ε > 0.
Proof. Using the result of [25], for any vectors w, x ∈ R^d,

Pr[sign(u^T(−V(w))) = sign(u^T V(x))] = 1 − (1/π) cos⁻¹( −V(w)^T V(x) / (‖V(w)‖ ‖V(x)‖) ),  (7)

where u ∈ R^(d²) is sampled from a standard d²-variate Gaussian distribution, u ∼ N(0, I). Note that for any unit vectors a, b ∈ R^d, V(a)^T V(b) = Tr(aa^T bb^T) = (a^T b)² = cos² θ_{a,b}.

Using (7) together with the definition of h_E above, given a hyperplane query w and database point x we have:

Pr[h_E(w) = h_E(x)] = 1 − (1/π) cos⁻¹(− cos²(θ_{x,w})) = cos⁻¹(cos²(θ_{x,w}))/π.  (8)

Hence, when (θ_{x,w} − π/2)² ≤ r,

Pr[h_E(w) = h_E(x)] ≥ (1/π) cos⁻¹(sin²(√r)) = p1,  (9)

and p2 is obtained similarly.
We observe that this p1 behaves similarly to 2(1/4 − r/π²). That is, as r varies, EH-Hash's p1 returns values close to twice those returned by H-Hash's p1 (see plot illustrating this in supplementary file). Hence, the factor ρ = log p1 / log p2 improves upon that of the previous section, remaining lower for lower values of ε, and leading to better approximation guarantees. See supplementary material for a more detailed comparison of the two bounds.
On the other hand, EH-Hash's hash functions are significantly more expensive to compute. Specifically, they require O(d²) time, whereas H-Hash requires only O(d). To alleviate this problem, we use a form of randomized sampling when computing the hash bits for a query that reduces the time to O(1/ε′²), for ε′ > 0. Our method relies on the following lemma, which states that sampling a vector v according to the weights of each element leads to a good approximation to v^T y for any vector y (with constant probability). Similar sampling schemes have been used for a variety of matrix approximation problems (see [26]).
Lemma 3.4. Let v ∈ R^d and define p_i = v_i²/‖v‖². Construct ṽ ∈ R^d such that the i-th element is v_i with probability p_i and is 0 otherwise. Select t such elements using sampling with replacement. Then, for any y ∈ R^d, ε′ > 0, c ≥ 1, t ≥ c/ε′²,

Pr[ |ṽ^T y − v^T y| ≤ ε′ ‖v‖₂ ‖y‖₂ ] > 1 − 1/c.  (10)
We defer the proof to the supplementary material. The lemma implies that at query time our hash function h_E(w) can be computed while incurring a small additive error in time O(1/ε′²), by sampling its embedding V(w) accordingly, and then cycling through only the non-zero indices of V(w) to compute u^T(−V(w)). Note that we can substantially reduce the error in the hash function computation by sampling O(1/ε′²) elements of the vector w and then using vec(w̃ w̃^T) as the embedding for w. However, in this case, the computational requirements increase to O(d/ε′²).
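A sketch of the sampling scheme as we read the lemma (the exact construction and proof are in the paper's supplementary material, so treat the details here, e.g. keeping sampled coordinates at their original values, as an assumption):

```python
import numpy as np

def sparse_sample(v, t, rng):
    """Draw t coordinates with replacement, with Pr[i] = v_i^2 / ||v||^2, and
    zero out all others, yielding a sparse v_tilde with v_tilde^T y ~ v^T y."""
    p = v ** 2 / np.sum(v ** 2)
    idx = rng.choice(v.size, size=t, replace=True, p=p)
    v_tilde = np.zeros_like(v)
    v_tilde[idx] = v[idx]
    return v_tilde
```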
While one could alternatively use the Johnson-Lindenstrauss (JL) lemma to reduce the dimensionality of the embedding with random projections, doing so has two major difficulties: first, the d − 1 dimensionality of a subspace represented by a hyperplane implies the random projection dimensionality must still be large for the JL lemma to hold, and second, the projection dimension is dependent on the sum of the number of database points and query hyperplanes. The latter is problematic when fielding an arbitrary number of queries over time or storing a growing database of points, both properties that are intrinsic to our target active learning application. In contrast, our sampling method is instance-dependent and incurs very little overhead for computing the hash function.
Comparison to [13]. Basri et al. define embeddings for finding nearest subspaces [13]. In particular,
they define Euclidean embeddings for affine subspace queries and database points which could be
used for NNQH, although they do not specifically apply it to hyperplane-to-point search in their
work. Also, their embedding is not tied to LSH bounds in terms of the distance function (2), as we
have shown above. Finally, our proposed instance-specific sampling strategy offers a more compact
representation with the advantages discussed above.
3.4 Recap of the Hashing Approaches
To summarize, we presented two locality-sensitive hashing approaches for the NNQH problem. Our first H-Hash approach defines locality-sensitivity in the context of NNQH, and then provides suitable two-bit hash functions together with a bound on retrieval time. Our second EH-Hash approach consists of a d²-dimensional Euclidean embedding for vectors of dimension d that in turn reduces NNQH to the Euclidean space nearest neighbor problem, for which efficient search structures (including LSH) are available. While EH-Hash has better bounds than H-Hash, its hash functions are more expensive. To mitigate the expense for high-dimensional data, we use a well-justified heuristic where we randomly sample the given query embedding, reducing the query time to linear in d.
Note that both of our approaches attempt to minimize d_θ(w, x) between the retrieved x and the hyperplane w. Since that distance depends only on the angle between x and w, any scaling of the vectors does not affect our methods, and we can safely treat the provided vectors as unit norm.
3.5 Application to Large-Scale Active Learning
The search algorithms introduced above can be applied for any task fitting their query/database specifications. We are especially interested in their relevance for making active learning scalable.

A practical paradox with pool-based active learning algorithms is that their intended value (to reduce learning time by choosing informative examples to label first) conflicts with the real expense of applying them to very large "unprepared" unlabeled datasets. Generally methods today are tested in somewhat canned scenarios: the implementor has a moderately sized labeled dataset, and simply withholds the labels from the learner until a given point is selected, at which point the "oracle" reveals the label. In reality, one would like to deploy an active learner on a massive truly unlabeled data pool (e.g., all documents on the Web) and let it crawl for the instances that appear most valuable for the target classification task. The problem is that a scan of millions of points is rather expensive to compute exhaustively, and thus defeats the purpose of improving overall learning efficiency.
Our algorithms make it possible to benefit from both massive unlabeled collections as well as actively chosen label requests. We consider the "simple margin" selection criterion for linear SVM classifiers [8, 9, 10]. Given a hyperplane classifier and an unlabeled pool of vector data U = {x_1, . . . , x_N}, the point that minimizes the distance to the current decision boundary is selected for labeling: x* = argmin_{x_i ∈ U} |w^T x_i|. Our two NNQH solutions supply exactly the hash functions needed to rapidly identify the next point to label: first we hash the unlabeled database into tables, and then at each active learning loop, we hash the current classifier w as a query.²

² The SVM bias term is handled by appending points with a 1. Note, our approach assumes linear kernels.
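One active learning iteration then reduces to a hash lookup plus a scan over the small candidate set. The sketch below is schematic: `query_fns` are the asymmetric k-bit query hashes from Secs. 3.2-3.3 used to build `tables`, and `oracle_label` stands in for the human annotator; none of these names come from the paper.

```python
import numpy as np

def active_learning_step(w, tables, query_fns, X, labeled, oracle_label):
    """'Simple margin' selection over a hashed unlabeled pool."""
    cands = set()
    for table, g in zip(tables, query_fns):      # union of buckets w hashes to
        cands.update(table.get(g(w), []))
    cands = [i for i in cands if i not in labeled]
    if not cands:                                # degenerate case: empty buckets
        cands = [i for i in range(len(X)) if i not in labeled]
    best = min(cands, key=lambda i: abs(X[i] @ w))  # argmin |w^T x_i| over candidates
    labeled[best] = oracle_label(best)           # request the label
    return best
```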
Figure 1: Newsgroups results. (a) Improvements in prediction accuracy relative to the initial classifier, averaged across all 20 categories and runs. (b) Time required to perform selection. (c) Value of |w^T x| for the selected examples. Lower is better. Both of our approximate methods (H-Hash and EH-Hash) significantly outperform the passive baseline; they are nearly as accurate as ideal exhaustive active selection, yet require 1-2 orders of magnitude less time to select an example. (Best viewed in color.)
Figure 2: CIFAR-10 results. (a)-(c) Plotted as in above figure. Our methods compare very well with the significantly more expensive exhaustive baseline. Our EH-Hash provides more accurate selection than our H-Hash (see (c)), though requires noticeably more query time (see (b)).
4 Results
We demonstrate our approach applied to large-scale active learning tasks. We compare our methods
(H-Hash in Sec. 3.2 and EH-Hash in Sec. 3.3) to two baselines: 1) passive learning, where the next
label request is randomly selected, and 2) exhaustive active selection, where the margin criterion
in (1) is computed over all unlabeled examples in order to find the true minimum. The main goal
is to show our algorithms can retrieve examples nearly as well as the exhaustive approach, but with
substantially greater efficiency.
Datasets and implementation details. We use three publicly available datasets. 20 Newsgroups consists of 20,000 documents from 20 newsgroup categories. We use the provided 61,118-d bag-of-words features, and a test set of 7,505. CIFAR-10 [27] consists of 60,000 images from 10 categories. It is a manually labeled subset of the 80 Million Tiny Image dataset [28], which was formed by searching the Web for all English nouns and lacks ground truth labels. We use the provided train and test splits of 50K and 10K images, respectively. Tiny-1M consists of the first 1,000,000 (unlabeled) images from [28]. For both CIFAR-10 and Tiny-1M, we use the provided 384-d GIST descriptors as features. For all datasets, we train a linear SVM in the one-vs-all setting using a randomly selected labeled set (5 examples per class), and then run active selection for 300 iterations. We average results across five such runs. We fix k = 300, N^ρ = 500, ε′ = 0.01.
Newsgroups documents results. Figure 1 shows the results on the 20 Newsgroups, starting with
the learning curves for all four approaches (a). The active learners (exact and approximate) have the
steepest curves, indicating that they are learning more effectively from the chosen labels compared
to the random baseline. Both of our hashing methods perform similarly to the exhaustive selection, yet require scanning an order of magnitude fewer examples (b). Note, Random requires ≈ 0 time. Fig. 1(c) shows the actual values of |w^T x| for the selected examples over all iterations, categories, and runs; in line with our methods' guarantees, they select points close to those found with exhaustive search. We also observe the expected trade-off: H-Hash is more efficient, while EH-Hash provides better results (only slightly better for this smaller dataset).
CIFAR-10 tiny image results. Figure 2 shows the same set of results on the CIFAR-10. The trends
are mostly similar to the above, although the learning task is more difficult on this data, narrowing the margin between active and random. Averaged over all classes, we happen to outperform exhaustive selection (Fig. 2(a)); this can happen since there is no guarantee that the best active choice will help test accuracy, and it also reflects the wider variation across per-class results. The boxplots in (c) more directly show the hashing methods are behaving as expected. Both (b) and (c) illustrate their trade-offs: EH-Hash has stronger guarantees than H-Hash (and thus retrieves lower |w^T x| values), but is more expensive. Figure 3(a) shows example image selection results; both exhaustive search and our hashing methods manage to choose images useful for learning about airplanes/non-airplanes.

Figure 3: (a) First seven examples selected per method when learning the CIFAR-10 Airplane class. (b) Improvements in prediction accuracy as a function of the total time taken, including both selection and labeling time. By minimizing both selection and labeling time, our methods provide the best accuracy per unit time.

Figure 4: Tiny-1M results. (a) Error of examples selected. (b) Time required. (c) Examples selected by EH-Hash among 1M candidates in the first nine iterations when learning the Airplane and Automobile classes.
Figure 3(b) shows the prediction accuracy plotted against the total time taken per iteration, which
includes both selection and labeling time, for both datasets. We set the labeling time per instance
to 1 and 5 seconds for the Newsgroups and Tiny image datasets, respectively. (Note, however, that
these could vary in practice depending on the difficulty of the instance.) These results best show
the advantage of our approximate methods: accounting for both types of cost inherent to training
the classifier, they outperform both exhaustive and random selection in terms of the accuracy gains
per unit time. While exhaustive active selection suffers because of its large selection time, random
selection suffers because it wastes expensive labeling time on irrelevant examples. Our algorithms
provide the best accuracy gains by minimizing both selection and labeling time.
Tiny-1M results. Finally, to demonstrate the practical capability of our hyperplane hashing approach, we perform active selection on the one million tiny image set. We initialize the classifier
with 50 examples from CIFAR-10. The 1M set lacks any labels, making this a "live" test of active learning (we ourselves annotated whatever the methods selected). We use our EH-Hash method, since it offers stronger performance.

Even on this massive collection, our method's selections are very similar in quality to the exhaustive method (see Fig. 4(a)), yet require orders of magnitude less time (b). The images (c) show the selections made from this large pool during the "live" labeling test; among all one million unlabeled examples (nearly all of which likely belong to one of the other 1000s of classes) our method
retrieves seemingly relevant instances. To our knowledge, this experiment exceeds any previous
active selection results in the literature in terms of the scale of the unlabeled pool.
Conclusions. We introduced two methods for the NNQH search problem. Both permit efficient
large-scale search for points near to a hyperplane, and experiments with three datasets clearly
demonstrate the practical value for active learning with massive unlabeled pools. For future work,
we plan to further explore more accurate hash functions for our H-Hash scheme and also investigate
sublinear time methods for non-linear kernel based active learning.
This work is supported in part by DARPA CSSG, NSF EIA-0303609, and the Luce Foundation.
References
[1] J. Freidman, J. Bentley, and A. Finkel. An Algorithm for Finding Best Matches in Logarithmic Expected
Time. ACM Transactions on Mathematical Software, 3(3):209–226, September 1977.
[2] J. Uhlmann. Satisfying General Proximity / Similarity Queries with Metric Trees. Information Processing
Letters, 40:175–179, 1991.
[3] A. Gionis, P. Indyk, and R. Motwani. Similarity Search in High Dimensions via Hashing. In Proceedings
of the 25th Intl Conf. on Very Large Data Bases, 1999.
[4] A. Andoni and P. Indyk. Near-Optimal Hashing Algorithms for Near Neighbor Problem in High Dimensions. In FOCS, 2006.
[5] M. Charikar. Similarity Estimation Techniques from Rounding Algorithms. In STOC, 2002.
[6] Y. Weiss, A. Torralba, and R. Fergus. Spectral Hashing. In NIPS, 2008.
[7] B. Kulis and K. Grauman. Kernelized Locality-Sensitive Hashing for Scalable Image Search. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2009.
[8] S. Tong and D. Koller. Support Vector Machine Active Learning with Applications to Text Classification.
In Proccedings of International Conference on Machine Learning, 2000.
[9] G. Schohn and D. Cohn. Less is More: Active Learning with Support Vector Machines. In Proccedings
of International Conference on Machine Learning, 2000.
[10] C. Campbell, N. Cristianini, and A. Smola. Query Learning with Large Margin Classifiers. In Proccedings
of International Conference on Machine Learning, 2000.
[11] G. Shakhnarovich, P. Viola, and T. Darrell. Fast Pose Estimation with Parameter-Sensitive Hashing. In
Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2003.
[12] R. Salakhutdinov and G. Hinton. Semantic Hashing. In Proceedings of the SIGIR Workshop on Information Retrieval and Applications of Graphical Models, 2007.
[13] R. Basri, T. Hassner, and L. Zelnik-Manor. Approximate Nearest Subspace Search. PAMI, 2010.
[14] A. Magen. Dimensionality Reductions that Preserve Volumes and Distance to Affine Spaces, and their
Algorithmic Applications. In Randomization and Approximation Techniques in Computer Science, 2002.
[15] A. Andoni, P. Indyk, R. Krauthgamer, and H. L. Nguyen. Approximate Line Nearest Neighbor in High
Dimensions. In SODA, 2009.
[16] B. Settles. Active Learning Literature Survey. TR 1648, University of Wisconsin, 2009.
[17] E. Chang, S. Tong, K. Goh, and C. Chang. Support Vector Machine Concept-Dependent Active Learning
for Image Retrieval. In IEEE Transactions on Multimedia, 2005.
[18] M. K. Warmuth, J. Liao, G. Ratsch, M. Mathieson, S. Putta, and C. Lemmen. Active Learning with
Support Vector Machines in the Drug Discovery Process. J. Chem. Inf. Comput. Sci., 43:667–673, 2003.
[19] A. Bordes, S. Ertekin, J. Weston, and L. Bottou. Fast Kernel Classifiers with Online and Active Learning.
Journal of Machine Learning Research (JMLR), 6:1579–1619, September 2005.
[20] N. Panda, K. Goh, and E. Chang. Active Learning in Very Large Image Databases. Journal of Multimedia
Tools and Applications: Special Issue on Computer Vision Meets Databases, 31(3), December 2006.
[21] W. Zhao, J. Long, E. Zhu, and Y. Liu. A Scalable Algorithm for Graph-Based Active Learning. In
Frontiers in Algorithmics, 2008.
[22] R. Segal, T. Markowitz, and W. Arnold. Fast Uncertainty Sampling for Labeling Large E-mail Corpora.
In Conference on Email and Anti-Spam, 2006.
[23] I. Tsang, J. Kwok, and P.-M. Cheung. Core Vector Machines: Fast SVM Training on Very Large Data
Sets. Journal of Machine Learning Research, 6:363–392, 2005.
[24] P. Indyk and N. Thaper. Fast Image Retrieval via Embeddings. In Intl Wkshp on Stat. and Comp. Theories
of Vision, 2003.
[25] M. Goemans and D. Williamson. Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. JACM, 42(6):1115–1145, 1995.
[26] R. Kannan and S. Vempala. Spectral Algorithms. Foundations and Trends in Theoretical Computer
Science, 4(3-4):157–288, 2009.
[27] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, University of
Toronto, 2009.
[28] A. Torralba, R. Fergus, and W. T. Freeman. 80 million Tiny Images: a Large Dataset for Non-Parametric
Object and Scene Recognition. PAMI, 30(11):1958–1970, 2008.
Learning to combine foveal glimpses with a third-order Boltzmann machine
Hugo Larochelle and Geoffrey Hinton
Department of Computer Science, University of Toronto
6 King's College Rd, Toronto, ON, Canada, M5S 3G4
{larocheh,hinton}@cs.toronto.edu
Abstract
We describe a model based on a Boltzmann machine with third-order connections
that can learn how to accumulate information about a shape over several fixations.
The model uses a retina that only has enough high resolution pixels to cover a
small area of the image, so it must decide on a sequence of fixations and it must
combine the ?glimpse? at each fixation with the location of the fixation before
integrating the information with information from other glimpses of the same object. We evaluate this model on a synthetic dataset and two image classification
datasets, showing that it can perform at least as well as a model trained on whole
images.
1 Introduction
Like insects with unmovable compound eyes, most current computer vision systems use images of
uniform resolution. Human vision, by contrast, uses a retina in which the resolution falls off rapidly
with eccentricity and it relies on intelligent, top-down strategies for sequentially fixating parts of the
optic array that are relevant for the task at hand. This "fixation point strategy" has many advantages:

• It allows the human visual system to achieve invariance to large scale translations by simply translating all the fixation points.

• It allows a reduction in the number of "pixels" that must be processed in parallel yet preserves the ability to see very fine details when necessary. This reduction allows the visual system to apply highly parallel processing to the sensory input produced by each fixation.

• It removes most of the force from the main argument against generative models of perception, which is that they waste time computing detailed explanations for parts of the image that are irrelevant to the task at hand. If task-specific considerations are used to select fixation points for a variable resolution retina, most of the irrelevant parts of the optic array will only ever be represented in a small number of large pixels.

If a system with billions of neurons at its disposal has adopted this strategy, the use of a variable resolution retina and a sequence of intelligently selected fixation points is likely to be even more advantageous for simulated visual systems that have to make do with a few thousand "neurons". In this paper we explore the computational issues that arise when the fixation point strategy is incorporated in a Boltzmann machine and demonstrate a small system that can make good use of a variable resolution retina containing very few pixels. There are two main computational issues:

• What-where combination: How can eye positions be combined with the features extracted from the retinal input (glimpses) to allow evidence for a shape to be accumulated across a sequence of fixations?

• Where to look next: Given the results of the current and previous fixations, where should the system look next to optimize its object recognition performance?
1
Retinal transformation
(reconstruction from xk )
Periphery (low-resolution)
Retinal transformation
(reconstruction from xk )
Periphery (low-resolution)
Image I
Image I
z(i2 , j2 )
i2 , j2
i1 , j1
z(i3 , j3 )
i3 , j3
x2
x2
y
x3
x3
Fovea (high-resolution)
x1
Fovea (high-resolution)
A
h
z(i1x
, j11 )
x1
x3
x2
C
B
Figure 1: A: Illustration of the retinal transformation r(I, (i, j)). The center dot marks the pixel
at position (i, j) (pixels are drawn as dotted squares). B: examples of glimpses computed by the
retinal transformation, at different positions (visualized through reconstructions). C: Illustration of
the multi-fixation RBM.
To tackle these issues, we rely on a special type of restricted Boltzmann machine (RBM) with third-order connections between visible units (the glimpses), hidden units (the accumulated features) and
position-dependent units which gate the connections between the visible and hidden units. We
describe approaches for training this model to jointly learn and accumulate useful features from the
image and control where these features should be extracted, and evaluate it on a synthetic dataset
and two image classification datasets.
2 Vision as a sequential process with retinal fixations
Throughout this work, we will assume the following problem framework. We are given a training set of image and label pairs {(I^t, l^t)}_{t=1}^N and the task is to predict the value of l^t (e.g. a class label l^t ∈ {1, . . . , C}) given the associated image I^t. The standard machine learning approach would consist in extracting features from the whole image I^t and from those directly learn to predict l^t. However, since we wish to incorporate the notion of fixation into our problem framework, we need to introduce some constraints on how information from I^t is acquired.
To achieve this, we require that information about an image I (removing the superscript t for simplicity) must be acquired sequentially by fixating (or querying) the image at a series of K positions [(i_1, j_1), . . . , (i_K, j_K)]. Given a position (i_k, j_k), which identifies a pixel I(i_k, j_k) in the image, information in the neighborhood of that pixel is extracted through what we refer to as a retinal transformation r(I, (i_k, j_k)). Much like the fovea of the human retina, this transformation extracts high-resolution information (i.e. copies the value of the pixels) from the image only in the neighborhood of pixel I(i_k, j_k). At the periphery of the retina, lower-resolution information is extracted by averaging the values of pixels falling in small hexagonal regions of the image. The hexagons are arranged into a spiral, with the size of the hexagons increasing with the distance from the center (i_k, j_k) of the fixation¹. All of the high-resolution and low-resolution information is then concatenated into a single vector given as output by r(I, (i_k, j_k)). An illustration of this retinal transformation is given in Figure 1. As a shorthand, we will use x_k to refer to the glimpse given by the output of the retinal transformation r(I, (i_k, j_k)).
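A simplified stand-in for r(I, (i, j)) conveys the idea; this sketch uses a square fovea and box-averaged surrounding windows instead of the hexagonal spiral, and it assumes the fixation is far enough from the image border:

```python
import numpy as np

def retinal_transform(image, i, j, fovea=3, rings=3):
    """Concatenate full-resolution pixels around (i, j) with mean intensities
    of progressively larger surrounding windows (coarse periphery)."""
    hi = image[i - fovea:i + fovea + 1, j - fovea:j + fovea + 1].ravel()
    lo = []
    for s in range(1, rings + 1):
        r = fovea * (s + 1)
        lo.append(image[i - r:i + r + 1, j - r:j + r + 1].mean())
    return np.concatenate([hi, np.array(lo)])
```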
3 A multi-fixation model
We now describe a system that can predict l from a few glimpses x_1, . . . , x_K. We know that this problem is solvable: [1] demonstrated that people can "see" a shape by combining information from multiple glimpses through a hole that is much smaller than the whole shape. He called this "anorthoscopic perception". The shape information derived from each glimpse cannot just be added
1 A retina with approximately hexagonal pixels produced by a log conformal mapping centered on the current fixation point has an interesting property: It is possible to use weight-sharing over scale and orientation instead of translation, but we do not explore this here.
as implied in [2]. It is the conjunction of the shape of a part and its relative location that provides
evidence for the shape of a whole object, and the natural way to deal with this conjunction is to use
multiplicative interactions between the "what" and "where".
Learning modules that incorporate multiplicative interactions have recently been developed [3, 4].
These can be viewed as energy-based models with three-way interactions. In this work, we build
on [5, 6] who introduced a method of keeping the number of parameters under control when incorporating such high-order interactions in a restricted Boltzmann machine. We start by describing
the standard RBM model for classification, and then describe how we adapt it to the multi-fixation
framework.
3.1 Restricted Boltzmann Machine for classification
RBMs are undirected generative models which model the distribution of a visible vector v of units
using a hidden vector of binary units h. For a classification problem with C classes, the visible layer
is composed of an input vector x and a target vector y, where the target vector follows the so-called "1 out of C" representation of the classification label l (i.e. y = e_l where all the components of e_l are 0 except for the l-th which is 1).
More specifically, given the following energy function:

E(y, x, h) = −h^T W x − b^T x − c^T h − d^T y − h^T U y  (1)

we define the associated distribution over x, y and h: p(y, x, h) = exp(−E(y, x, h))/Z.
Assuming x is a binary vector, it can be shown that this model has the following posteriors:

p(h|y, x) = ∏_j p(h_j|y, x), where p(h_j = 1|y, x) = sigm(c_j + U_{j·} y + W_{j·} x)  (2)

p(x|h) = ∏_i p(x_i|h), where p(x_i = 1|h) = sigm(b_i + h^T W_{·i})  (3)
p(y = e_l | h) = exp(d_l + h^T U_{·l}) / Σ_{l′=1}^{C} exp(d_{l′} + h^T U_{·l′})  (4)

where A_{j·} and A_{·i} respectively refer to the j-th row and i-th column of matrix A. These posteriors make it easy to do inference or sample from the model using Gibbs sampling. For real-valued input vectors, an extension of Equation 1 can be derived to obtain a Gaussian distribution for the conditional distribution over x of Equation 3 [7].
Another useful property of this model is that all hidden units can be marginalized over analytically in order to exactly compute

p(y = e_l | x) = exp(d_l + Σ_j softplus(c_j + U_{jl} + W_{j·} x)) / Σ_{l′=1}^{C} exp(d_{l′} + Σ_j softplus(c_j + U_{jl′} + W_{j·} x))  (5)

where softplus(a) = log(1 + exp(a)). Hence, classification can be performed for some given input x by computing Equation 5 and choosing the most likely class.
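Equation 5 translates directly into a vectorized computation (a minimal sketch; the parameter shapes follow the notation above, with an input of size D):

```python
import numpy as np

def class_posterior(x, W, U, c, d):
    """Exact p(y = e_l | x) of the classification RBM (Equation 5).
    W: H x D, U: H x C, c: H hidden biases, d: C class biases."""
    act = c[:, None] + U + (W @ x)[:, None]          # act[j, l] = c_j + U_jl + W_j. x
    logits = d + np.logaddexp(0.0, act).sum(axis=0)  # softplus summed over hidden units
    logits -= logits.max()                           # for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```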
3.2 Multi-fixation RBM
At first glance, a very simple way of using the classification RBM of the previous section in the multi-fixation setting would be to set x = x_{1:K} = [x_1, . . . , x_K]. However, doing so would completely throw away the information about the position of the fixations. Instead, we could redefine the energy function of Equation 1 as follows:

E(y, x_{1:K}, h) = Σ_{k=1}^{K} ( −h^T W^{(i_k,j_k)} x_k − b^T x_k ) − c^T h − d^T y − h^T U y  (6)
where the connection matrix W^{(i_k,j_k)} now depends on the position of the fixation². Such connections are called high-order (here third-order) because they can be seen as connecting the hidden units, input units and implicit position units (one for each possible value of positions (i_k, j_k)). Conditioned on the position units (which are assumed to be given), this model is still an RBM satisfying the traditional conditional independence properties between the hidden and visible units.

² To be strictly correct in our notation, we should add the position coordinates (i_1, j_1), . . . , (i_K, j_K) as an input of the energy function E(y, x, h). To avoid clutter however, we will consider the position coordinates to be implicitly given by x_1, . . . , x_K.
For a given m × m grid of possible fixation positions, all W^{(i_k,j_k)} matrices contain m²HR parameters where H is the number of hidden units and R is the size of the retinal transformation. To reduce that number, we parametrize or factorize the W^{(i_k,j_k)} matrices as follows

W^{(i_k,j_k)} = P diag(z(i_k, j_k)) F  (7)

where F is D × R, P is H × D, z(i_k, j_k) is a (learned) D-dimensional vector associated to position (i_k, j_k) and diag(a) is a matrix whose diagonal is the vector a. Hence, W^{(i_k,j_k)} is now an outer product of the D lower-dimensional bases in F ("filters") and P ("pooling"), gated by a position specific vector z(i_k, j_k). Instead of learning a separate matrix W^{(i_k,j_k)} for each possible position, we now only need to learn a separate vector z(i_k, j_k) for each position. Intuitively, the vector z(i_k, j_k) controls which rows of F and columns of P are used to accumulate the glimpse at position (i_k, j_k) into the hidden layer of the RBM. A similar factorization has been used by [8]. We emphasize that z(i_k, j_k) is not stochastic but is a deterministic function of position (i_k, j_k), trained by backpropagation of gradients from the multi-fixation RBM learning cost. In practice, we force the components of z(i_k, j_k) to be in [0, 1]³. The multi-fixation RBM is illustrated in Figure 1.
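The factorization also makes the per-fixation computation cheap, since W^{(i_k,j_k)} never has to be formed explicitly (a sketch under the dimensions stated above):

```python
import numpy as np

def gated_hidden_input(F, P, z, x):
    """Compute W^{(i,j)} x = P diag(z(i,j)) F x without building W.
    F: D x R filters, P: H x D pooling, z: D-dim gate, x: R-dim glimpse."""
    return P @ (z * (F @ x))   # O(DR + HD) per glimpse instead of O(HR)
```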
4 Learning in the multi-fixation RBM
The multi-fixation RBM must learn to accumulate useful features from each glimpse, and it must
also learn a good policy for choosing the fixation points. We refer to these two goals as "learning the what-where combination" and "learning where to look".
4.1 Learning the what-where combination
For now, let's assume that we are given the sequence of glimpses x^t_{1:K} fed to the multi-fixation RBM for each image I^t. As suggested by [9], we can train the RBM to minimize the following hybrid cost over each input x^t_{1:K} and label l^t:

Hybrid cost:  C_hybrid = − log p(y^t | x^t_{1:K}) − α log p(y^t, x^t_{1:K})  (8)

where y^t = e_{l^t}. The first term in C_hybrid is the discriminative cost and its gradient with respect to
the RBM parameters can be computed exactly, since p(y^t | x^t_{1:K}) can be computed exactly (see [9] for more details on how to derive these gradients). The second term is the generative cost and its gradient can only be approximated. Contrastive Divergence [10] based on one full step of Gibbs sampling provides a good enough approximation. The RBM is then trained by doing stochastic or mini-batch gradient descent on the hybrid cost.
In [9], it was observed that there is typically a value of α which yields better performance than using either discriminative or generative costs alone. Putting more emphasis on the discriminative term ensures that more capacity is allocated to predicting the label values than to predicting each pixel value, which is important because there are many more pixels than labels. The generative term acts as a data-dependent regularizer that encourages the RBM to extract features that capture the statistical structure of the input. This is a much better regularizer than the domain-independent priors implemented by L1 or L2 regularization.

We can also take advantage of the following obvious fact: If the sequence x^t_{1:K} is associated with a particular target label y^t, then so are all the subsequences x^t_{1:k} where k < K. Hence, we can also train the multi-fixation RBM on these subsequences using the following "hybrid-sequential" cost:
Hybrid-sequential cost:  C_hybrid−seq = Σ_{k=1}^{K} [ − log p(y^t | x^t_{1:k}) − α log p(y^t, x^t_k | x^t_{1:k−1}) ]  (9)

where the second term, which corresponds to negative log-likelihoods under a so-called conditional RBM [8], plays a similar role to the generative cost term of the hybrid cost and encourages the RBM to learn about the statistical structure of the input glimpses. An estimate of the gradient of this term can also be obtained using Contrastive Divergence (see [8] for more details). While being more expensive than the hybrid cost, the hybrid-sequential cost could yield better generalization performance by better exploiting the training data. Both costs are evaluated in Section 6.1.

³ This is done by setting z(i_k, j_k) = sigm(z̃(i_k, j_k)) and learning the unconstrained z̃(i_k, j_k) vectors instead. We also use a learning rate 100 times larger for learning those parameters.
4.2 Learning where to look
Now that we have a model for processing the glimpses resulting from fixating at different positions, we need to define a model which will determine where those fixations should be made on the m × m grid of possible positions.

After k − 1 fixations, this model should take as input some vector s_k containing information about the glimpses accumulated so far (e.g. the current activation probabilities of the multi-fixation RBM hidden layer), and output a score f(s_k, (i_k, j_k)) for each possible fixation position (i_k, j_k). This score should be predictive of how useful fixating at the given position will be. We refer to this model as the controller.
Ideally, the fixation position with highest score under the controller should be the one which maximizes the chance of correctly classifying the input image. For instance, a good controller could be such that

f(s_k, (i_k, j_k)) ∝ log p(y^t | x^t_{1:k−1}, x^t_k = r(I, (i_k, j_k)))  (10)

i.e. its output is proportional to the log-probability the RBM will assign to the true target y^t of the image I^t once it has fixated at position (i_k, j_k) and incorporated the information in that glimpse. In other words, we would like the controller to assign high scores to fixation positions which are more likely to provide the RBM with the necessary information to make a correct prediction of y^t.
A simple training cost for the controller could then be to reduce the absolute difference between its prediction f(s_k, (i_k, j_k)) and the observed value of log p(y^t | x^t_{1:k−1}, x_k = r(I, (i_k, j_k))) for the sequences of glimpses generated while training the multi-fixation RBM. During training, these sequences of glimpses can be generated from the controller using the Boltzmann distribution

p_controller((i_k, j_k) | x^t_{1:k−1}) ∝ exp(f(s_k, (i_k, j_k)))  (11)

which ensures that all fixation positions can be sampled but those which are currently considered more useful by the controller are also more likely to be chosen. At test time however, for each k, the position that is the most likely under the controller is chosen⁴.
In our experiments, we used a linear model for f(s_k, (i_k, j_k)), with separate weights for each possible value of (i_k, j_k). The controller is the same for all k, i.e. f(s_k, (i_k, j_k)) only depends on the values of s_k and (i_k, j_k) (though one could consider training a separate controller for each k). A constant learning rate of 0.001 was used for training. As for the value taken by s_k, we set it to

s_k = sigm( c + Σ_{k′=1}^{k−1} W^{(i_{k′},j_{k′})} x_{k′} ) = sigm( c + Σ_{k′=1}^{k−1} P diag(z(i_{k′}, j_{k′})) F x_{k′} )  (12)

which can be seen as an estimate of the probability vector for each hidden unit of the RBM to be 1, given the previous glimpses x_{1:k−1}. For the special case k = 1, s_1 is computed based on a fixation at the center of the image but all the information in this initial glimpse is then "forgotten", i.e. it is only used for choosing the first image-dependent fixation point and is not used by the multi-fixation RBM to accumulate information about the image. We also concatenate to s_k a binary vector of size m² (one component for each possible fixation position), where a component is 1 if the associated position has been fixated. Finally, in order to ensure that a fixation position is never sampled twice, we impose that p_controller((i_k, j_k) | x^t_{1:k−1}) = 0 for all positions previously sampled.
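A sketch of the controller's two ingredients, the accumulated state of Equation 12 and the masked sampling of Equation 11 (the names and the masking-by-−∞ trick are implementation choices, not from the paper):

```python
import numpy as np

def controller_state(c, F, P, zs, glimpses):
    """s_k of Equation 12: sigmoid of the accumulated gated hidden inputs."""
    total = c.copy()
    for z, x in zip(zs, glimpses):       # one (z, glimpse) pair per past fixation
        total += P @ (z * (F @ x))
    return 1.0 / (1.0 + np.exp(-total))

def sample_fixation(scores, visited, rng):
    """Draw a position from p_controller ~ exp(f), never revisiting a position."""
    logits = np.where(visited, -np.inf, scores)
    p = np.exp(logits - logits[~visited].max())
    p /= p.sum()
    return rng.choice(scores.size, p=p)
```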
4.3 Putting it all together
Figure 2 summarizes how the multi-fixation RBM and the controller are jointly trained, for either the hybrid cost or the hybrid-sequential cost. Details on gradient computations for both costs are also given in the supplementary material. To our knowledge, this is the first implemented system for combining glimpses that jointly trains a recognition component (the RBM) with an attentional component (the fixation controller).

⁴ While it might not be optimal, this greedy search for the best sequence of fixation positions is simple and worked well in practice.
5 Related work
A vast array of work has been dedicated to modelling the visual search behavior of humans [11, 12,
13, 14], typically through the computation of saliency maps [15, 16]. Most of such work, however,
is concerned with the prediction of salient regions in an image, and not with the other parts of a
task-oriented vision classifier.
Surprisingly little work has been done on how best to combine multiple glimpses in a recognition
system. SIFT features have been proposed either as a prefilter for reducing the number of possible
fixation positions [17] or as a way of preprocessing the raw glimpses [13]. [18] used a fixed and
hand-tuned saliency map to sample small patches in images of hand-written characters and trained
a recursive neural network from sequences of such patches. By contrast, the model proposed here
does not rely on hand-tuned features or saliency maps and learns from scratch both the where to look
and what-where combination components. A further improvement on the aforecited work consists
in separately learning both the where to look and the what-where combination components [19, 20].
In this work however, both components are learned jointly, as opposed to being put together only at
test time. For instance, [19] use a saliency map based on filters previously trained on natural images
for the where to look component, and the what-where combination component for recognition is a
nearest neighbor density estimator. Moreover, their goal is not to avoid fixating everywhere, but to
obtain more robust recognition by using a saliency map (whose computation effectively corresponds
to fixating everywhere in the image). In that respect, our work is orthogonal, as we are treating each
fixation as a costly operation (e.g. we considered up to 6 fixations, while they used 100 fixations).
6
Experiments
We present three experiments on three different image classification problems. The first is based
on the MNIST dataset and is meant to evaluate the multi-fixation RBM alone (i.e. without the controller). The second is on a synthetic dataset and is meant to analyze the controller learning algorithm
and its interaction with the multi-fixation RBM. Finally, results on a facial expression recognition
problem are presented.
6.1
Experiment 1: Evaluation of the multi-fixation RBM
In order to evaluate the multi-fixation RBM of Section 3.2 separately from the controller model, we
trained a multi-fixation RBM5 on a fixed set of 4 fixations (i.e. the same fixation positions for all images). Those fixations were centered around the pixels at positions {(9, 9), (9, 19), (19, 9), (19, 19)}
(MNIST images are of size 28 ? 28) and their order was chosen at random for every parameter update of the RBM. The retinal transformation had a high-resolution fovea covering 38 pixels and 60
hexagonal low-resolution regions in the periphery (see Figure 2 for an illustration). We used the
training, validation and test splits proposed by [21], with a training set of 10 000 examples.
The results are given in Figure 2, with comparisons with an RBF kernel SVM classifier and a single
hidden layer neural network initialized using unsupervised training of an RBM on the training set
(those two baselines were trained on the full MNIST images). The multi-fixation RBM yields performance comparable to the baselines despite only having four glimpses, and the hybrid-sequential
cost function works better than the non-sequential, hybrid cost.
6.2 Experiment 2: Evaluation of the controller
In this second experiment, we designed a synthetic problem where the optimal fixation policy is known, to validate the proposed training algorithm for the controller.

Footnote 5: The RBM used H = 500 hidden units and was trained with a constant learning rate of 0.1 (no momentum was used). The learned position vectors z(i_k, j_k) were of size D = 250. Training lasted for 2000 iterations, with a validation set used to keep track of generalization performance and remember the best parameter value of the RBM. We report results when using either the hybrid cost of Equation 8 or the hybrid-sequential cost of Equation 9, with α = 0.1. Mini-batches of size 100 were used.
Figure 2, panel A (pseudocode for the training update):

    compute s_1 based on the center of the image
    for k from 1 to K do
        sample (i_k, j_k) from p_controller((i_k, j_k) | x_{1:k-1})
        compute x_k = r(I, (i_k, j_k))
        update the controller with a gradient step for the error
            |f(s_k, (i_k, j_k)) - log p(y | x_{1:k})|
        if using the hybrid-sequential cost then
            accumulate the gradient on the RBM parameters of the kth term in C_hybrid-seq
        end if
        compute s_{k+1}
    end for
    if using the hybrid-sequential cost then
        update the RBM parameters based on the accumulated gradient of C_hybrid-seq
    else {using the hybrid cost}
        update the RBM based on the gradient of C_hybrid
    end if

Figure 2, panel B (Experiment 1: MNIST with 4 fixations):

    Model                                     Error
    NNet+RBM [22]                             3.17% (± 0.15)
    SVM [21]                                  3.03% (± 0.15)
    Multi-fixation RBM (hybrid)               3.20% (± 0.15)
    Multi-fixation RBM (hybrid-sequential)    2.76% (± 0.14)

Figure 2: A: Pseudocode for the training update of the multi-fixation RBM, using either the hybrid or hybrid-sequential cost. B: illustration of glimpses and results for the experiment on MNIST.
The task is to identify whether there is a horizontal (positive class) or vertical (negative class) 3-pixel white bar somewhere near the edge of a 15 × 15 pixel image. At the center of the image is one of 8 visual symbols, indicating the location of the bar. This symbol conveys no information about the class (the positive and negative classes are equiprobable) but is necessary to identify where to fixate. Figure 3 shows positive and negative examples. There are only 48 possible images and the model is trained on all of them (i.e. we are measuring the capacity of the model to learn this problem perfectly). Since, as described earlier, the input s_1 of the controller contains information about the center of the image, only one fixation decision by the controller suffices to solve this problem.

A multi-fixation RBM was trained jointly with a controller on this problem (see footnote 6), with only K = 1 fixation. When trained according to the hybrid cost of Equation 8 (α = 1), the model was able to solve this problem perfectly without errors, i.e. the controller always proposes to fixate at the region containing the white bar and the multi-fixation RBM always correctly recognizes the orientation of the bar. However, using only the discriminative cost (α = 0), it is never able to solve it (i.e. it has an error rate of 50%), even if trained twice as long as for α = 1. This is because the purely discriminative RBM never learns meaningful features for the non-discriminative visual symbol at the center, which are essential for the controller to be able to predict the position of the white bar.
6.3 Experiment 3: Facial expression recognition
Finally, we applied the multi-fixation RBM with its controller to a problem of facial expression recognition. The dataset [23] consists of 4178 images of size 100 × 100, depicting people acting one of seven facial expressions (anger, disgust, fear, happiness, sadness, surprise and neutral; see Figure 3 for examples). Five training, validation and test set splits were generated, ensuring that all images of a given person can only be found in one of the three sets. Pixel values of the images were scaled to the [−0.5, 0.5] interval.

A multi-fixation RBM learned jointly with a controller was trained on this problem (see footnote 7), with K = 6 fixations. Possible fixation positions were laid out every 10 pixels on a 7 × 7 grid, with the top-left
Footnote 6: Hyper-parameters: H = 500, D = 250. Stochastic gradient descent was used with a learning rate of 0.001. The controller had the choice of 9 possible fixation positions, each covering either one of the eight regions where bars can be found or the middle region where the visual symbol is. The retinal transformation was such that information from only one of those regions is transferred.

Footnote 7: Hyper-parameters: H = 250, D = 250. Stochastic gradient descent was used with a learning rate of 0.01. The RBM was trained with the hybrid cost of Equation 8 with α = 0.001 (the hybrid cost was preferred mainly because it is faster). Also, the matrix P was set to the identity matrix and only F was learned (this removed a matrix multiplication and thus accelerated learning in the model, while still giving good results). The vectors z(i, j) were initialized in a topographic manner (i.e. each component of z(i, j) is 0 only in a small region of the image). Finally, to avoid overfitting, exponentially decaying averages of the parameters of the model were maintained throughout training and were used as the values of the model at test time.
[Figure 3 content: positive and negative example images from the synthetic dataset of Experiment 2; example faces from the facial expression recognition dataset of Experiment 3; and a plot of accuracy (roughly 0.4 to 0.65) versus number of fixations (1 to 6) comparing the multi-fixation RBM with the SVM baseline.]

Figure 3: A: positive and negative examples from the synthetic dataset of experiment 2. B: examples and results for the facial expression recognition dataset.
position being at pixel (20, 20). The retinal transformation covered around 2000 pixels and didn't use a periphery (see footnote 8; all pixels were from the fovea). Moreover, glimpses were passed through a "preprocessing" hidden layer of size 250, initialized by unsupervised training of an RBM with Gaussian visible units (but without target units) on glimpses from the 7 × 7 grid. During training of the multi-fixation RBM, the discriminative part of its gradient was also passed through the preprocessing hidden layer for fine-tuning of its parameters.

Footnote 8: This simplification of the retinal transformation makes it more convenient to estimate the percentage of high-resolution pixels used by the multi-fixation RBM and contrast it with the SVM trained on the full image.
Results are reported in Figure 3, where the multi-fixation RBM is compared to an RBF kernel SVM trained on the full images. The accuracy of the RBM is given after a varying number of fixations. We can see that after 3 fixations (i.e. around 60% of the image) the multi-fixation RBM reaches a performance that is statistically equivalent to that of the SVM (58.2 ± 1.5%) trained on the full images. Training the SVM on a scaled-down version of the data (48 × 48 pixels) gives a similar performance of 57.8% (±1.5%). At 5 fixations, the multi-fixation RBM now improves on the SVM, and gets even better at 6 fixations, with an accuracy of 62.7% (±1.5%). Finally, we also computed the performance of a linear SVM classifier trained on the concatenation of the hidden units from a unique RBM with Gaussian visible units applied at all 7 × 7 positions (the same RBM used for initializing the preprocessing layer of the multi-fixation RBM was used). This convolutional approach, which requires 49 fixations, yields a performance of 61.2% (±1.5%), slightly worse than but statistically indistinguishable from the multi-fixation RBM, which only required 6 fixations.
7 Conclusion
Human vision is a sequential sampling process in which only a fraction of the optic array is ever
processed at the highest resolution. Most computer vision work on object recognition ignores this
fact and can be viewed as modelling tachistoscopic recognition of very small objects that lie entirely
within the fovea. We have focused on the other extreme, i.e. recognizing objects by using multiple
task-specific fixations of a retina with few pixels, and obtained positive results. We believe that the
intelligent choice of fixation points and the integration of multiple glimpses will be essential for
making biologically inspired vision systems work well on large images.
Acknowledgments
We thank Marc'Aurelio Ranzato and the reviewers for many helpful comments, and Josh Susskind
and Tommy Liu for help with the facial expression dataset. This research was supported by NSERC.
References
[1] Hermann von Helmholtz. Treatise on physiological optics. Dover Publications, New York, 1962.
[2] Arash Fazl, Stephen Grossberg, and Ennio Mingolla. View-invariant object category learning, recognition, and search: how spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58(1):1–48, 2009.
[3] Roland Memisevic and Geoffrey E. Hinton. Unsupervised learning of image transformations. In Computer Vision and Pattern Recognition. IEEE Computer Society, 2007.
[4] Urs Köster and Aapo Hyvärinen. A two-layer ICA-like model estimated by score matching. In ICANN'07: Proceedings of the 17th International Conference on Artificial Neural Networks, pages 798–807, Berlin, Heidelberg, 2007. Springer-Verlag.
[5] Geoffrey E. Hinton. Learning to represent visual input. Phil. Trans. R. Soc., 365(1537):177–84, 2010.
[6] Roland Memisevic and Geoffrey E. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Computation, 22:1473–1492, 2010.
[7] Geoffrey E. Hinton and Ruslan Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, July 2006.
[8] Graham W. Taylor and Geoffrey E. Hinton. Factored conditional restricted Boltzmann machines for modeling motion style. In ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, pages 1025–1032, New York, NY, USA, 2009. ACM.
[9] Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted Boltzmann machines. In ICML '08: Proceedings of the 25th International Conference on Machine Learning, pages 536–543, New York, NY, USA, 2008. ACM.
[10] Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[11] Rajesh P.N. Rao, Gregory J. Zelinsky, Mary M. Hayhoe, and Dana H. Ballard. Modeling saccadic targeting in visual search. In David S. Touretzky, Michael Mozer, and Michael E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 830–836. MIT Press, 1996.
[12] Laura Walker Renninger, James M. Coughlan, Preeti Verghese, and Jitendra Malik. An information maximization model of eye movements. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1121–1128. MIT Press, Cambridge, MA, 2005.
[13] Wei Zhang, Hyejin Yang, Dimitris Samaras, and Gregory Zelinsky. A computational model of eye movements during object class detection. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1609–1616. MIT Press, Cambridge, MA, 2006.
[14] Antonio Torralba, Monica S. Castelhano, Aude Oliva, and John M. Henderson. Contextual guidance of
eye movements and attention in real-world scenes: the role of global features in object search. Psychological Review, 113:2006, 2006.
[15] Laurent Itti, Christof Koch, and Ernst Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell., 20(11):1254–1259, 1998.
[16] Laurent Itti and Christof Koch. Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3):194–203, 2001.
[17] Lucas Paletta, Gerald Fritz, and Christin Seifert. Q-learning of sequential attention for visual object recognition from informative local descriptors. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 649–656, New York, NY, USA, 2005. ACM.
[18] Ethem Alpaydin. Selective attention for handwritten digit recognition. In David S. Touretzky, Michael Mozer, and Michael E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 771–777. MIT Press, 1996.
[19] Christopher Kanan and Garrison Cottrell. Robust classification of objects, faces, and flowers using natural
image statistics. In CVPR, 2010.
[20] Stephen Gould, Joakim Arfvidsson, Adrian Kaehler, Benjamin Sapp, Marius Messner, Gary Bradski, Paul Baumstarck, Sukwon Chung, and Andrew Y. Ng. Peripheral-foveal vision for real-time object recognition and tracking in video. In International Joint Conference on Artificial Intelligence (IJCAI), 2007.
[21] Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, and Yoshua Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pages 473–480, New York, NY, USA, 2007. ACM.
[22] Hugo Larochelle, Yoshua Bengio, Jerome Louradour, and Pascal Lamblin. Exploring strategies for training deep neural networks. Journal of Machine Learning Research, 10:1–40, 2009.
[23] Josh M. Susskind, Adam K. Anderson, and Geoffrey E. Hinton. The toronto face database. Technical
Report UTML TR 2010-001, Dept. of Computer Science, University of Toronto, 2010.
Interval Estimation for Reinforcement-Learning
Algorithms in Continuous-State Domains
Adam White
Department of Computing Science
University of Alberta
[email protected]
Martha White
Department of Computing Science
University of Alberta
[email protected]
Abstract
The reinforcement learning community has explored many approaches to obtaining value estimates and models to guide decision making; these approaches, however, do not usually provide a measure of confidence in the estimate. Accurate
estimates of an agent's confidence are useful for many applications, such as biasing exploration and automatically adjusting parameters to reduce dependence on parameter-tuning. Computing confidence intervals on reinforcement learning value estimates, however, is challenging because data generated by the agent-environment interaction rarely satisfies traditional assumptions. Samples of value estimates are dependent, likely non-normally distributed and often limited, particularly in early learning when confidence estimates are pivotal. In this work, we
investigate how to compute robust confidences for value estimates in continuous
Markov decision processes. We illustrate how to use bootstrapping to compute
confidence intervals online under a changing policy (previously not possible) and
prove validity under a few reasonable assumptions. We demonstrate the applicability of our confidence estimation algorithms with experiments on exploration,
parameter estimation and tracking.
1 Introduction
In reinforcement learning, an agent interacts with the environment, learning through trial-and-error
based on scalar reward signals. Many reinforcement learning algorithms estimate values for states to
enable selection of maximally rewarding actions. Obtaining confidence intervals on these estimates
has been shown to be useful in practice, including directing exploration [17, 19] and deciding when
to exploit learned models of the environment [3]. Moreover, there are several potential applications
using confidence estimates, such as teaching interactive agents (using confidence estimates as feedback), adjusting behaviour in non-stationary environments and controlling behaviour in a parallel
multi-task reinforcement learning setting.
Computing confidence intervals was first studied by Kaelbling for finite-state Markov decision processes (MDPs) [11]. Since this preliminary work, many model-based algorithms have been proposed
for evaluating confidences for discrete-state MDPs. The extension to continuous-state spaces with
model-free learning algorithms, however, has yet to be undertaken. In this work we focus on constructing confidence intervals for online model-free reinforcement learning agents.
The agent-environment interaction in reinforcement learning does not satisfy classical assumptions
typically used for computing confidence intervals, making accurate confidence estimation challenging. In the discrete case, certain simplifying assumptions make classical normal intervals more
appropriate; in the continuous setting, we will need a different approach.
The main contribution of this work is a method to robustly construct confidence intervals for approximated value functions in the continuous-state reinforcement learning setting. We first describe bootstrapping, a non-parametric approach to estimating confidence intervals from data. We then prove
that bootstrapping can be applied to our setting, addressing challenges due to sample dependencies,
changing policies and non-stationarity (because of learning). Then, we discuss how to address complications in computing confidence intervals for sparse or local linear representations, common in
reinforcement learning, such as tile coding, radial basis functions, tree-based representations and
sparse distributed memories. Finally, we propose several potential applications of confidence intervals in reinforcement learning and conclude with an empirical investigation of the practicality of our
confidence estimation algorithm for exploration, tuning the temporal credit parameter and tracking.
2 Related Work
Kaelbling was the first to employ a confidence interval estimation method for exploration in finite-state MDPs [11]. The agent estimates the probability of receiving a reward of 1.0 for a given state-
interval. Exploration is directed by selecting the action with the highest upper confidence bound,
which corresponds to actions for which it has high uncertainty or high value estimates [11].
Interval estimation for model-based reinforcement learning with discrete state spaces has been quite
extensively studied. Mannor et al. (2004) investigated confidence estimates for the parameters of the
learned transition and reward models, assuming Gaussian rewards [5, 16]. The Model Based Interval
Estimation Algorithm (MBIE) uses upper confidence bounds on the model transition probabilities to
select the model that gives the maximal reward [22]. The Rmax algorithm uses a heuristic notion of
confidence (state visitation counts) to determine when to explore or exploit the learned model [3].
Both Rmax and MBIE are guaranteed to converge to the optimal policy in polynomially many steps.
These guarantees, however, become difficult for continuous state spaces.
A recently proposed framework, KWIK ("Knows What It Knows"), formalizes algorithms that explore efficiently by minimizing the number of times an agent must return the response "I do not know" [23]. For example, in reinforcement learning domains, KWIK-RMAX biases exploration toward states for which the algorithm does not currently "know" an accurate estimate of the value [23]. KWIK-RMAX provides an uncertainty estimate (not a confidence interval) on a linear
model by evaluating if the current feature vector is contained in the span of previously observed
feature vectors. Though quite general, the algorithm remains theoretical due to the requirement of a
solution to the model.
Bayesian methods (e.g., GPTD [6]) provide a natural measure of confidence: one can use the posterior distribution to form credible intervals for the mean value of a state-action pair. However, if one
wants to use non-Gaussian priors and likelihoods, then the Bayesian approach is intractable without
appropriate approximations. Although this approach is promising, we are interested in computing
classical frequentist confidence intervals for agents, while not restricting the underlying learning
algorithm to use a model or particular update mechanism.
Several papers have demonstrated the empirical benefits of using heuristic confidence estimates to
bias exploration [14, 17, 19] and guide data collection in model learning [9, 18]. For example, Nouri
et al. [19] discretize the state space with a KD-tree and mark the state as "known" after reaching a
visitation count threshold.
In the remainder of this work, we provide the first study of estimating confidence intervals for
model-free, online reinforcement learning value estimates in the continuous-state setting.
3 Background
In this section, we will introduce the reinforcement learning model of sequential decision making
and bootstrapping, a family of techniques used to compute confidence intervals for means of dependent data from an unknown (likely non-normal) underlying distribution.
3.1 Reinforcement Learning
In reinforcement learning, an agent interacts with its environment, receiving observations and selecting actions to maximize a scalar reward signal provided by the environment. This interaction is
usually modeled by a Markov decision process (MDP). An MDP consists of (S, A, P, R) where S is
the set of states; A is a finite set of actions; P, the transition function, describes the probability of reaching a state s′ from a given state and action (s, a); and finally the reward function R(s, a, s′) returns a scalar value for transitioning from state-action (s, a) to state s′. The state of the environment is said to be Markov if Pr(s_{t+1}, r_{t+1} | s_t, a_t) = Pr(s_{t+1}, r_{t+1} | s_t, a_t, . . . , s_0, a_0). The agent's objective is to learn a policy, π : S → A, such that R is maximized for all s ∈ S.
Many reinforcement learning algorithms maintain a state-action value function, Q^π(s, a), equal to the expected discounted sum of future rewards for a given state-action pair:

    Q^π(s, a) = E_π[ Σ_{k=0}^{∞} γ^k r_{t+k+1} | s_t = s, a_t = a ],

where γ ∈ [0, 1] discounts the contribution of future rewards.
The optimal state-action value function, Q*(s, a), is the maximum achievable value given the agent starts in state s and selects action a. The optimal policy, π*, is greedy with respect to the optimal value function: π*(s) = argmax_{a∈A} Q*(s, a) for all s ∈ S. During learning the agent must balance selecting actions to achieve high reward (according to Q̂(s, a)) and selecting actions to gain more information about the environment. This is called the exploration-exploitation trade-off.
In many practical applications, the state space is too large to store in a table. In this case, a function approximator is used to estimate the value of a state-action pair. A linear function approximator produces a value prediction using a linear combination of basis units: Q̂(s, a) = θ^⊤φ(s, a). We refer the reader to the introductory text [25] for a more detailed discussion of reinforcement learning.
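As a concrete illustration of this parametrization, the following short Python sketch builds Q̂(s, a) = θ^⊤φ(s, a) with a stand-in sparse binary feature map (a hash-based placeholder for tile coding; the names and the feature construction are assumptions for illustration):

    import numpy as np

    d = 1000                 # number of basis units
    theta = np.zeros(d)      # learned weight vector

    def phi(s, a, n_active=8):
        # Stand-in for tile coding: hash (s, a) to a few active binary features.
        seed = abs(hash((tuple(s), a))) % (2**32)
        idx = np.random.default_rng(seed).choice(d, size=n_active, replace=False)
        f = np.zeros(d)
        f[idx] = 1.0
        return f

    def q_value(s, a):
        # Linear value prediction Q(s, a) = theta^T phi(s, a)
        return float(theta @ phi(s, a))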
3.2 Bootstrapping a confidence interval for dependent data
Bootstrapping is a statistical procedure for estimating the distribution of a statistic (such as the
sample mean), particularly when the underlying distribution is complicated or unknown, samples
are dependent and power calculations (e.g. variance) are estimated with limited sample sizes [21].
This estimate can then be used to approximate a 1 ? ? confidence interval around the statistic: an
interval for which the probability of seeing the statistic outside of the interval is low (probability ?).
For example, for potentially dependent data sampled from an unknown distribution P(X_1, X_2, . . .), we can use bootstrapping to compute a confidence interval around the mean, T_n = n^{−1} Σ_{i=1}^{n} x_i. The key idea behind bootstrapping is that the data is an appropriate approximation, P_n, of the true distribution: resampling from the data represents sampling from P_n. Samples are "drawn" from P_n to produce a bootstrap sample, x*_1, . . . , x*_n ∈ {x_1, . . . , x_n}, and an estimate, T*_n, of the statistic. This process is repeated B times, giving B samples of the statistic, T*_{n,1}, . . . , T*_{n,B}. These, for example, can be used to estimate Var_P(T_n) ≈ Var_{P_n}(T_n) = Σ_b (T*_{n,b} − T̄*_n)^2 / (B − 1).
Bootstrapped intervals have been shown to have a lower coverage error than normal intervals for dependent, non-normal data. A normal interval has a coverage error of O(1/√n), whereas bootstrapping has a coverage error of O(n^{−3/2}) [29]. The coverage error represents how quickly the estimated interval converges to the true interval: a higher-order coverage error indicates faster convergence (see footnote 1). Though the theoretical conditions for these guarantees are somewhat restrictive [29], bootstrapping has nevertheless proved very useful in practice for more general data [4, 21].
With the bootstrapped samples, a percentile-t (studentized) interval is constructed by

    P( T ∈ ( 2T_n − T*_{1−α/2} , 2T_n − T*_{α/2} ) ) ≈ 1 − α,

where T*_β is the β sample quantile of T*_{n,1}, . . . , T*_{n,B}. Usually, the β-quantile of an ordered population of size n is the continuous sample quantile

    (1 − r) T*_{n,j} + r T*_{n,j+1},   where j = ⌊nβ + m⌋ and r = nβ + m − j,

with m dependent on the quantile type; m = (β + 1)/3 is common for non-normal distributions.
The remaining question is how to bootstrap from the sequence of samples. In the next section, we describe the block bootstrap, applicable to Markov processes, which we will show represents the structure of data for value estimates in reinforcement learning.

Footnote 1: More theoretically, coverage error is the approximation error in the Edgeworth expansions used to approximate the distribution in bootstrap proofs.
3.2.1 Moving Block Bootstrap
In the moving block bootstrap method, blocks of consecutive samples are drawn with replacement from a set of overlapping blocks, making the k-th block {x_{k−1+t} : t = 1, . . . , l}. The bootstrap resample is the concatenation of n/l blocks chosen randomly with replacement, making a time series of length n; B of these concatenated resamples are used in the bootstrap estimate. The block bootstrap is appropriate for sequential processes because the blocks implicitly maintain a time-dependent structure. A common heuristic for the block length, l, is n^{1/3} [8].
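A short Python sketch of one moving block bootstrap resample, using the l = n^{1/3} heuristic (trimming the concatenation back to length n is an assumed convention):

    import numpy as np

    def block_bootstrap_resample(x, l=None, rng=None):
        # Concatenate ~n/l blocks of length l drawn with replacement
        # from the overlapping blocks of the series x.
        rng = rng or np.random.default_rng(0)
        x = np.asarray(x, dtype=float)
        n = len(x)
        l = l or max(1, int(round(n ** (1.0 / 3.0))))
        starts = rng.integers(0, n - l + 1, size=int(np.ceil(n / l)))
        resample = np.concatenate([x[s:s + l] for s in starts])
        return resample[:n]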
The moving block bootstrap was designed for stationary, dependent data; our scenario, however, involves nonstationary data. Lahiri [12] proved a coverage error of o(n^{−1/2}) when applying the moving block bootstrap to nonstationary, dependent data, better than the normal coverage error. Fortunately, the conditions are not restrictive for our scenario, described further in the next section.
Note that there are other bootstrapping techniques applicable to sequential, dependent data with
lower coverage error, such as the double bootstrap [13], block-block bootstrap [1] and Markov or
Sieve bootstrap [28]. In particular, the Markov bootstrap has been shown to have a lower coverage error for Markov data than the block bootstrap under certain restricted conditions [10]. These
techniques, however, have not been shown to be valid for nonstationary data.
4 Confidence intervals for continuous-state Markov decision processes
In this section, we present a theoretically sound approach to constructing confidence intervals for
parametrized Q(s, a) using bootstrapping for dependent data. We then discuss how to address sparse
representations, such as tile coding, which make confidence estimation more complicated.
4.1 Bootstrapped Confidence Intervals for Global Representations
The goal is to compute a confidence estimate for Q(s_t, a_t) on time step t. Assume that we are learning a parametrized value function Q(s, a) = f(θ, s, a), with θ ∈ R^d and a smooth function f : R^d × S × A → R. A common example is a linear value function Q(s, a) = θ^⊤φ(s, a), with φ : S × A → R^d. During learning, we have a sequence of changing weights, {θ_1, θ_2, . . . , θ_n}, up to time step n, corresponding to the random process {Θ_1, . . . , Θ_n}. If this process were stationary, then we could compute an interval around the mean of the process. In almost all cases, however, the process will be nonstationary with means {μ_1, . . . , μ_n}. Instead, our goal is to estimate

    f̄_n(s, a) = n^{−1} Σ_{t=1}^{n} E[f(Θ_t, s, a)],

which represents the variability in the current estimation of the function Q̂ for any given state-action pair, (s, a) ∈ S × A. Because Q is parametrized, the sequence of weights, {Θ_t}, represents the variability for the uncountably many state-action pairs.
Assume that the weight vector on time step t + 1 is drawn from the unknown distribution P_a[(Θ_{t+1}, s_{t+1}) | (θ_t, s_t), . . . , (θ_{t−k}, s_{t−k})], giving a k-order Markov dependence on previous states and weight vectors. Notice that P_a incorporates P and R, using s_t, θ_t (giving the policy π) and R to determine the reward passed to the algorithm to then obtain θ_{t+1}. This allows the learning algorithm to select actions using confidence estimates based on the history of the k most recent θ, without invalidating that the sequence of weights is drawn from P_a. In practice, the length of the dependence, k, can be estimated using auto-correlation [2].
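One simple way to pick k in practice is to scan the sample autocorrelation of the sequence of value estimates and take the first lag at which it becomes negligible; the cutoff rule below is a heuristic assumption:

    import numpy as np

    def estimate_dependence_length(q, max_lag=50, threshold=0.1):
        # q: sequence of value estimates Q_1(s, a), ..., Q_n(s, a)
        q = np.asarray(q, dtype=float)
        q = q - q.mean()
        denom = float(np.dot(q, q))
        if denom == 0.0:
            return 1
        for lag in range(1, min(max_lag, len(q) - 1)):
            rho = float(np.dot(q[:-lag], q[lag:])) / denom
            if abs(rho) < threshold:
                return lag
        return max_lag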
Applying the moving block bootstrap method to a non-stationary sequence of θ's requires several assumptions on the underlying MDP and the learning algorithm. We require two assumptions on the underlying MDP: a bounded density function and a strong mixing requirement. The assumptions on the algorithm are less strict, only requiring that the algorithm be non-divergent and produce a sequence of {Q_t(s, a)} that 1) satisfies a smoothness condition (a dependent Cramér condition), 2) has a bounded twelfth moment and 3) satisfies an m-dependence relation where sufficiently separated Q_i(s, a), Q_j(s, a) are independent. Based on these assumptions (stated formally in the supplement), we can prove that the moving block bootstrap produces an interval with a coverage error of o(n^{−1/2}) for the studentized interval on f̄_n(s, a).
Theorem 1. Given that Assumptions 1–7 are satisfied and there exist constants C_1, C_2 > 0 and 0 < δ ≤ ε < 1/4 such that C_1 n^δ < l < C_2 n^ε (i.e. l increases with n), the moving block bootstrap produces a one-sided confidence interval that is consistent and has a coverage error of o(n^{−1/2}) for the studentization of the mean of the process {f(θ_t, s, a)}, where Q_t(s, a) = f(θ_t, s, a).
The proof for the above theorem follows Lahiri's proof [12] for the coverage error of the moving block bootstrap for nonstationary data. The general approach for coverage error proofs involves approximating the unknown distribution with an Edgeworth expansion (see [7]), with the coverage error dependent on the order of the expansion, similar to the idea of a Taylor series expansion.

Assuming P_a is k-order Markov results in two important practical implications for the learning algorithm: 1) the inability to use eligibility traces and 2) restrictions on updates to parameters (such as the learning rate). These potential issues, however, are actually not restrictive. First, the tail of eligibility traces has little effect, particularly for larger k; the most recent k weights incorporate the most important information for the eligibility traces. Second, the learning rate, for example, cannot be updated based on time. The learning rate, however, can still be adapted based on changes between weight vectors, a more principled approach taken by the meta-learning algorithm IDBD [24].
The final algorithm is summarized in the pseudocode below. In practice, a window of data of length w is stored due to memory restrictions; other data selection techniques are possible. Corresponding to the notation in Section 3.2, Q_i represents the data samples (of Q̂(s, a)), (Q*_{i,1}, . . . , Q*_{i,M}) the dependently sampled blocks for the ith resample and T*_i the mean of the ith resample.
Algorithm 1 GetUpperConfidence(f(θ, s, a), {θ_{n−w}, . . . , θ_n}, α)
    // l = block length, B = number of bootstrap resamples
    // inputs: the last w weights and the confidence level α (= 0.05)
     1: Q_N ← {f(θ_{n−w}, s, a), . . . , f(θ_n, s, a)}
     2: Blocks ← {[Q_{n−w}, . . . , Q_{n−w+l−1}], [Q_{n−w+1}, . . . , Q_{n−w+l}], . . . , [Q_{n−l+1}, . . . , Q_n]}
     3: M ← ⌊w/l⌋    // the number of length-l blocks to sample with replacement and concatenate
     4: for all i = 1 to B do
     5:     (Q*_1, Q*_2, . . . , Q*_{M·l}) ← concatMRandomBlocks(Blocks, M)
     6:     T*_i ← (1/(M·l)) Σ_j Q*_j
     7: end for
     8: sort({T*_1, . . . , T*_B})
     9: j ← ⌊Bα/2 + (α+2)/6⌋,  r ← Bα/2 + (α+2)/6 − j
    10: T*_{α/2} ← (1 − r) T*_j + r T*_{j+1}
    11: return 2 · mean(Q_N) − T*_{α/2}
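The following is a direct Python transcription of Algorithm 1; concatMRandomBlocks is realized inline by indexing random blocks, and the quantile step mirrors lines 9–10. The default B and the handling of short windows are assumptions:

    import numpy as np

    def get_upper_confidence(q_window, alpha=0.05, B=500, rng=None):
        # q_window: the last w values f(theta_{n-w}, s, a), ..., f(theta_n, s, a)
        rng = rng or np.random.default_rng(0)
        q = np.asarray(q_window, dtype=float)
        w = len(q)
        l = max(1, int(round(w ** (1.0 / 3.0))))        # block length heuristic
        blocks = np.array([q[i:i + l] for i in range(w - l + 1)])
        M = max(1, w // l)                              # blocks per resample
        t_star = np.empty(B)
        for i in range(B):                              # lines 4-7
            picks = rng.integers(0, len(blocks), size=M)
            t_star[i] = blocks[picks].mean()            # mean of concatenated blocks
        t_star.sort()                                   # line 8
        h = B * alpha / 2.0 + (alpha + 2.0) / 6.0       # lines 9-10
        j = min(max(int(np.floor(h)), 1), B - 1)
        r = h - np.floor(h)
        t_alpha2 = (1 - r) * t_star[j - 1] + r * t_star[j]
        return 2.0 * q.mean() - t_alpha2                # line 11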
4.2 Bootstrapped Confidence Intervals for Sparse Representations
We have shown that bootstrapping is a principled approach for computing intervals for global representations; sparse representations, however, complicate the solution. In an extreme case, for example, for linear representations, features active on time step t may have never been active before. Samples Q_1(s_t, a_t), . . . , Q_t(s_t, a_t) would therefore all equal Q_0(s_t, a_t), because the weights would never have been updated for those features. Consequently, the samples erroneously indicate low variance for Q(s_t, a_t).

We propose that, for sparse linear representations, the samples for the weights can be treated independently and still produce a reasonable, though currently unproven, bootstrap interval. Notice that, for θ(i) the ith component of the weight vector,

    P_a[(θ_t, s_t) | (θ_{t−1}, s_{t−1}), . . . , (θ_{t−k}, s_{t−k})] = ∏_{i=1}^{d} P_a[(θ_t(i), s_t) | (θ_{t−1}, s_{t−1}), . . . , (θ_{t−k}, s_{t−k})],

because updates to weights θ(i), θ(j) are independent given the previous states and weight vectors for all i, j ∈ {1, . . . , d}. We could, therefore, estimate upper confidence bounds on the individual weights, ucb_i(s, a), and then combine them, via ucb(s, a) = Σ_{i=1}^{d} ucb_i(s, a) · φ_i(s, a), to produce an upper confidence bound on Q(s_t, a_t). To approximate the variance of θ(i) on time step t, we can use the last w samples of θ(i) where θ(i) changed.
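A sketch of this combination step, reusing get_upper_confidence from the transcription of Algorithm 1 above (the per-weight history store is a hypothetical data structure):

    import numpy as np

    def sparse_ucb(phi_sa, weight_histories, alpha=0.05):
        # ucb(s, a) = sum_i ucb_i(s, a) * phi_i(s, a), over active features only.
        # weight_histories[i]: last w samples of theta(i), recorded when it changed.
        total = 0.0
        for i in np.flatnonzero(phi_sa):
            total += get_upper_confidence(weight_histories[i], alpha) * phi_sa[i]
        return total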
Proving coverage error results for sparse representations will require analyzing the covariance between components of θ over time. The above approach for sparse representations does not capture this covariance; due to sparsity, however, the dependence between many of the samples for θ(i) and θ(j) will likely be weak. We could potentially extend the theoretical results by bounding the covariance between the samples and exploiting independencies. The means for individual weights could likely be estimated separately, therefore, and still enable a valid confidence interval. In future work, a potential extension is to estimate the covariances between the individual weights to improve the interval estimate.
5 Applications of confidence intervals for reinforcement learning
The most obvious application of interval estimation is to bias exploration to select actions with
high uncertainty. Confidence-based exploration should be comparable to optimistic initialization
in domains where exhaustive search is required and find better policies in domains where noisy
rewards and noisy dynamics can cause the optimistic initialization to be prematurely decreased and
inhibit exploration. Furthermore, confidence-based exploration reduces parameter tuning because
the policy does not require knowledge of the reward range, as in softmax and optimistic initialization.
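In its simplest form, the resulting behaviour policy just maximizes the upper bound; a minimal sketch, assuming q_upper is any interval routine such as the bootstrap bound of Section 4:

    import numpy as np

    def select_action(s, actions, q_upper):
        # Confidence-directed exploration: pick the action with the largest
        # upper confidence bound on Q(s, a).
        ucbs = [q_upper(s, a) for a in actions]
        return actions[int(np.argmax(ucbs))]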
Confidence-based exploration could be beneficial in domains where the problem dynamics and reward function change over time. In an extreme case, the agent may converge to a near-optimal policy
before the goal is teleported to another portion of the space. If the agent continues to act greedily
with respect to its action-value estimates without re-exploring, it may act sub-optimally indefinitely.
These tracking domains require that the agent "notice" that its predictions are incorrect and begin searching for a better policy. An example of a changing reward signal arises in interactive teaching. In this scenario, a human teacher shapes the agent by providing a drifting reward signal. Even in stationary domains, tracking the optimal policy may be more effective than converging, due to the non-stationarity introduced by imperfect function approximation [26].
Another potential application of confidence estimation is to automate parameter tuning online. For example, many TD-based reinforcement learning algorithms use an eligibility parameter (λ) to address the credit assignment problem. Learning performance can be sensitive to λ. There has been little work, however, exploring the effects of different decay functions for λ; using different λ values for each state/feature; or meta-learning λ. Confidence estimates could be used to increase λ when the agent is uncertain and to decrease λ for confident value estimates [25].
Confidence estimates could also be used to guide the behaviour policy for a parallel multi-task
reinforcement learning system. Due to recent theoretical developments [15], several target value
functions can be learned in parallel, off-policy, based on a single stream of data from a behaviour
policy. The behaviour policy should explore to provide samples that generalize well between the
various target policies, speeding overall convergence. For example, if one-sided intervals are maintained for each target value function, the behaviour policy could select an action corresponding to
the maximal sum of those intervals. Exploration is then biased to highly uncertain areas where more
samples are required.
Finally, confidence estimates could be used to determine when features should be evaluated in a
feature construction algorithm. Many feature construction algorithms, such as cascade correlation
networks, interleave proposing candidate features and evaluation. In an online reinforcement learning setting, these methods freeze the representation for a fixed window of time to accurately evaluate
the candidate [20]. Instead of using a fixed window, a more principled approach is to evaluate the
features after the confidence on the weights of the candidate features reaches some threshold.
6 Experimental Results
In this section, we provide a preliminary experimental investigation into the practicality of confidence estimation in continuous-state MDPs. We evaluate a naive implementation of the block bootstrap method for (1) exploration in a noisy reward domain, (2) automatically tuning λ in the Cartpole domain and (3) tracking a moving goal in a navigation task. In all tests we used the Sarsa(λ) learning algorithm with tile coding function approximation (see Sutton and Barto [25]). All experiments
were evaluated using RL-Glue [27] and averaged over 30 independent runs.
[Figure 1 plots: (a) average steps until termination per episode and (b) cumulative reward per episode, over 200 episodes, for confidence exploration, normal-interval exploration, ε-greedy, softmax and optimistic initialization on the navigation task.]

Figure 1: Results showing (a) convergence of various exploration techniques in the navigation task and (b) average cumulative reward of various exploration techniques on the navigation task.
6.1 Exploration
To evaluate the effectiveness of confidence-based exploration, we use a simple two-goal continuous navigation task. The small goal yields a reward of 1.0 on every visit. The flashing goal yields a reward selected uniformly from {100, −100, 5, −5, 50}. The reward on all other steps is zero and γ = 0.99 (similar results for −1 per step and γ = 1.0). The agent's observation is a continuous (x, y) position and actions move the agent {N, S, E, W}, perturbed by uniform noise 10% of the time. We present only the first 200 episodes to highlight early learning performance.
Similar to Kaelbling, we select the action with the highest upper confidence in each state. We compare our confidence exploration algorithm to three baselines commonly used in continuous-state MDPs: (1) ε-greedy (selecting the highest-value action with probability 1 − ε, a random action otherwise), (2) optimistic initialization (initializing all weights to a high fixed value to encourage exploration) and (3) softmax (choosing actions probabilistically according to their values). We also compare our algorithm to an exploration policy using normal (instead of bootstrapped) intervals to investigate the effectiveness of making simplifying assumptions on the data distribution. We present the results for the best parameter setting of each exploration policy for clarity. Figure 1 summarizes the results.
The ε-greedy policy converges slowly to the small goal. The optimistic policy slowly converges to the small goal for lower initializations and does not favour either goal for higher initializations. The softmax policy navigates to the small goal on most runs and also converges slowly. The normal-interval exploration policy does prefer the flashing goal, but not as quickly as the bootstrap policy. Finally, the bootstrap-interval exploration policy achieves the highest cumulative reward and is the only policy that converges to the flashing goal, despite the large variance in the reward signal.
6.2 Adjusting Lambda
To illustrate the effect of adjusting λ based on confidence intervals, we study the Cartpole problem. We selected Cartpole because the performance of Sarsa is particularly sensitive to λ in this domain. The objective in Cartpole is to apply forces to a cart on a track to keep a pole from falling over. An episode ends when the pole falls past a given angle or the cart reaches the end of the track. The reward is +1 for each step of the episode. The agent's observations are the cart position and velocity and the pole's angle and angular velocity. The Cartpole environment is based on Sutton and Barto's [25] pole-balancing task and is available in RL-library [27].
To adjust the λ value, we reset λ on every time step: λ = normalized(ucb), where ucb = 0.9 · ucb + 0.1 · getUpperConfidence(f(θ, s, a), {θ_{n−w}, . . . , θ_n}, α). The confidence estimates were only used to adjust λ for clarity: exploration was performed using optimistic initialization. Figure 2 presents the average balancing time on the last episode for various values of λ. The flat line depicts the average balancing time for Sarsa with λ tuned via confidence estimates. Setting λ via confidence estimates achieves performance near the best value of λ. We also tested adjusting λ using normal confidence intervals; however, the normal confidence intervals resulted in worse performance than any fixed value of λ.
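A sketch of this λ-adaptation rule; the normalization of ucb into [0, 1] via running bounds is an assumption, since normalized(·) is not spelled out in the text:

    import numpy as np

    def adapt_lambda(ucb_ema, new_ucb, lo, hi, decay=0.9):
        # ucb <- 0.9*ucb + 0.1*new_ucb, then map into [0, 1] for lambda.
        ucb_ema = decay * ucb_ema + (1.0 - decay) * new_ucb
        lam = (ucb_ema - lo) / max(hi - lo, 1e-8)
        return ucb_ema, float(np.clip(lam, 0.0, 1.0))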
[Figure 2 plot: average steps until termination (roughly 300 to 1000) as a function of λ ∈ {0.0, 0.1, 0.5, 0.9, 1.0}, with a horizontal line for the confidence-tuned variant.]

Figure 2: Performance of Sarsa(λ) on Cartpole for various values of λ. The straight line depicts the performance of Sarsa with λ adjusted using the confidence estimation algorithm.
6.3 Non-stationary Navigation Task
One natural source of non-stationarity is introduced by shaping a robot through successive approximations to a goal task (e.g., changing the reward function). We studied the effects of this form
of non-stationarity, where the agent learns to go to a goal and then another, better goal becomes
available (near the first goal to better guide it to the next goal). In our domain, the agent receives -1
reward per step and +10 at termination in a goal region. After 150 episodes, the goal region is teleported to a new location within 50 steps of the previous goal. The agent receives +10 in the new goal
and now 0 in the old goal. We used ε = 0 so that exploration was driven only by optimistic initialization.
We recorded the number of times the agent converged to the new goal after the change, following an initial learning period of 150 episodes. The bootstrap-based explorer found the new goal 70% of the time. It did not always find the new goal because the −1 reward structure biased it to stay with the safe 0 goal. Interestingly, optimistic initialization was unable to find the new goal because of this bias, illustrating that the confidence-based explorer detected the increase in variance and promoted re-exploration automatically.
7 Conclusion
In this work, we investigated constructing confidence intervals on value estimates in the continuous-state reinforcement learning setting. We presented a robust approach to computing confidence estimates for function approximation using bootstrapping, a nonparametric estimation technique. We
proved that our confidence estimate has low coverage error under mild assumptions on the learning
algorithm. In particular, we did so even for a changing policy that uses the confidence estimates. We
illustrated the usefulness of our estimates for three applications: exploration, tuning λ and tracking.
We are currently exploring several directions for future work. We have begun testing the confidence-based exploration on a mobile robot platform. Despite the results presented in this work, many
traditional deterministic, negative cost-to-goal problems (e.g., Mountain Car, Acrobot and Puddle
World) are efficiently solved using optimistic exploration. Robotic tasks, however, are often more
naturally formulated as continual learning tasks with a sparse reward signal, such as negative reward
for bumping into objects, or a positive reward for reaching some goal. We expect confidence based
techniques to perform better in these settings where the reward range may be truly unknown (e.g.
generated dynamically by a human teacher) and under natural variability in the environment (noisy
sensors and imperfect motion control). We have also begun evaluating confidence-interval driven
behaviour for large-scale, parallel off-policy learning on the same robot platform.
There are several potential algorithmic directions, in addition to those mentioned throughout this
work. We could potentially improve coverage error by extending other bootstrapping techniques,
such as the Markov bootstrap, to non-stationary data. We could also explore the theoretical work
on exponential bounds, such as the Azuma-Hoeffding inequality, to obtain different confidence estimates with low coverage error. Finally, it would be interesting to extend the theoretical results in
the paper to sparse representations.
Acknowledgements: We would like to thank Csaba Szepesvári, Narasimha Prasad and Daniel Lizotte for their helpful comments and NSERC, Alberta Innovates and the University of Alberta for
funding the research.
References
[1] D.W.K. Andrews. The block-block bootstrap: Improved asymptotic refinements. Econometrica, 72(3):673–700, 2004.
[2] G.E.P. Box, G.M. Jenkins, and G.C. Reinsel. Time series analysis: forecasting and control. Holden-Day, San Francisco, 1976.
[3] R. I. Brafman and M. Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231, 2002.
[4] A.C. Davison and D.V. Hinkley. Bootstrap methods and their application. Cambridge University Press, 1997.
[5] E. Delage and S. Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1):203, 2010.
[6] Y. Engel, S. Mannor, and R. Meir. Reinforcement learning with Gaussian processes. In Proceedings of
the 22nd international conference on Machine learning, page 208. ACM, 2005.
[7] P. Hall. The bootstrap and Edgeworth expansion. Springer Series in Statistics, Jan 1997.
[8] Peter Hall, Joel L. Horowitz, and Bing-Yi Jing. On blocking rules for the bootstrap with dependent data. Biometrika, 82(3):561–74, 1995.
[9] Todd Hester and Peter Stone. Generalized model learning for reinforcement learning in factored domains. In The Eighth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2009.
[10] J.L. Horowitz. Bootstrap methods for Markov processes. Econometrica, 71(4):1049–1082, 2003.
[11] Leslie P. Kaelbling. Learning in Embedded Systems (Bradford Books). The MIT Press, May 1993.
[12] S.N. Lahiri. Edgeworth correction by moving block bootstrap for stationary and nonstationary data. Exploring the Limits of Bootstrap, pages 183–214, 1992.
[13] S. Lee and P.Y. Lai. Double block bootstrap confidence intervals for dependent data. Biometrika, 2009.
[14] L. Li, M.L. Littman, and C.R. Mansley. Online exploration in least-squares policy iteration. In Proc. of the 8th Int. Conf. on Autonomous Agents and Multiagent Systems, volume 2, pages 733–739, 2009.
[15] H.R. Maei, C. Szepesvári, S. Bhatnagar, and R.S. Sutton. Toward off-policy learning control with function approximation. In ICML, 2010.
[16] S. Mannor, D. Simester, P. Sun, and J.N. Tsitsiklis. Bias and variance in value function estimation. In
Proceedings of the twenty-first international conference on Machine learning, page 72. ACM, 2004.
[17] Lilyana Mihalkova and Raymond J. Mooney. Using active relocation to aid reinforcement learning. In
FLAIRS Conference, pages 580?585, 2006.
[18] Peter Stone Nicholas K. Jong. Model-based exploration in continuous state spaces. In The 7th Symposium
on Abstraction, Reformulation, and Approximation, July 2007.
[19] A. Nouri and M.L. Littman. Multi-resolution exploration in continuous spaces. In NIPS, pages 1209?
1216, 2008.
[20] Franc?ois Rivest and Doina Precup. Combining td-learning with cascade-correlation networks. In ICML,
pages 632?639, 2003.
[21] J. Shao and D. Tu. The jackknife and bootstrap. Springer, 1995.
[22] A.L. Strehl and M.L. Littman. An empirical evaluation of interval estimation for markov decision processes. In Proc. of the 16th Int. Conf. on Tools with Artificial Intelligence (ICTAI04), 2004.
[23] Alexander L. Strehl and Michael L. Littman. Online linear regression and its application to model-based
reinforcement learning. In NIPS, 2007.
[24] R.S. Sutton. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In Proceedings
of the National Conference on Artificial Intelligence, pages 171?171, 1992.
[25] R.S. Sutton and A.G. Barto. Introduction to reinforcement learning. MIT Press Cambridge, USA, 1998.
[26] R.S. Sutton, A. Koop, and D. Silver. On the role of tracking in stationary environments. In Proceedings
of the 24th international conference on Machine learning, page 878. ACM, 2007.
[27] Brian Tanner and Adam White. RL-Glue : Language-independent software for reinforcement-learning
experiments. JMLR, 10:2133?2136, September 2009.
[28] B A. Turlach. Bandwidth selection in kernel density estimation: A review. In CORE and Institut de
Statistique, 1993.
[29] J. Zvingelis. On bootstrap coverage probability with dependent data. Computer-Aided Econ., 2001.
9
| 4090 |@word mild:1 trial:1 exploitation:1 illustrating:1 innovates:1 achievable:1 interleave:1 polynomial:1 nd:1 glue:2 twelfth:1 turlach:1 termination:3 gptd:1 bn:1 simplifying:2 covariance:4 prasad:1 q1:1 moment:1 initial:1 series:4 selecting:5 daniel:1 tuned:1 bootstrapped:5 interestingly:1 past:1 current:2 yet:1 must:2 fn:1 concatenate:1 shape:1 designed:1 update:3 resampling:1 stationary:9 greedy:5 selected:2 intelligence:2 xk:1 version:1 ith:2 core:1 indefinitely:1 num:1 davison:1 provides:1 mannor:4 complication:1 location:1 successive:1 constructed:1 c2:2 become:1 symposium:1 incorrect:1 prove:3 consists:1 introductory:1 combine:1 introduce:1 theoretically:2 expected:1 multi:3 discounted:1 alberta:4 automatically:3 td:2 little:2 window:3 becomes:1 provided:1 estimating:3 moreover:1 underlying:5 bounded:2 begin:1 notation:1 rivest:1 what:1 mountain:1 rmax:4 proposing:1 csaba:1 bootstrapping:14 guarantee:2 temporal:1 every:2 ti:2 act:2 stateaction:1 interactive:2 continual:1 biometrika:2 control:3 normally:1 unit:1 t1:2 before:2 positive:1 local:1 todd:1 limit:1 sutton:6 despite:2 analyzing:1 initialization:9 studied:3 dynamically:1 challenging:2 limited:2 range:2 averaged:1 directed:1 practical:2 testing:1 practice:4 block:27 edgeworth:4 bootstrap:36 procedure:1 jan:1 area:1 delage:1 empirical:3 cascade:2 adapting:1 confidence:79 radial:1 statistique:1 argmaxa:1 seeing:1 cannot:1 selection:3 applying:2 py:1 restriction:2 deterministic:1 demonstrated:1 go:1 independently:1 resolution:1 mansley:1 factored:1 rule:1 studentization:1 population:1 proving:1 notion:1 searching:1 autonomous:2 updated:2 controlling:1 target:3 construction:2 ualberta:2 bumping:1 us:3 pa:6 velocity:2 approximated:1 particularly:4 continues:1 blocking:1 observed:1 role:1 initializing:1 capture:1 solved:1 region:2 sun:1 episode:8 trade:1 highest:4 inhibit:1 decrease:1 principled:3 mentioned:1 environment:11 pd:1 reward:31 econometrica:2 littman:4 dynamic:2 basis:2 shao:1 various:5 separated:1 univ:1 describe:2 effective:1 detected:1 artificial:2 outside:1 choosing:1 exhaustive:1 quite:2 heuristic:3 larger:1 otherwise:1 statistic:6 noisy:4 final:1 online:7 sequence:6 propose:2 interaction:3 maximal:2 reset:1 remainder:1 tu:1 combining:1 mixing:1 achieve:1 exploiting:1 convergence:5 double:2 requirement:2 extending:1 jing:1 produce:7 adam:2 converges:3 incremental:1 object:1 silver:1 illustrate:2 andrew:1 qt:3 strong:1 coverage:20 ois:1 c:2 indicate:1 involves:1 direction:2 safe:1 exploration:39 human:2 enable:3 require:4 behaviour:7 preliminary:2 investigation:2 brian:1 sarsa:5 adjusted:1 extension:2 exploring:4 correction:1 around:3 credit:2 cramer:1 normal:13 deciding:1 sufficiently:1 hall:2 algorithmic:1 automate:1 achieves:2 early:2 consecutive:1 resample:3 estimation:18 proc:2 applicable:2 currently:3 sensitive:2 engel:1 tool:1 mit:2 sensor:1 gaussian:3 always:1 reaching:3 pn:3 reinsel:1 mobile:1 barto:3 probabilistically:1 focus:1 bernoulli:1 likelihood:1 indicates:1 greedily:1 baseline:1 lizotte:1 helpful:1 dependent:17 abstraction:1 typically:1 holden:1 a0:1 relation:1 selects:1 interested:1 issue:1 overall:1 development:1 platform:2 softmax:5 equal:2 construct:2 never:2 sampling:1 represents:6 icml:1 future:4 few:1 employ:1 franc:1 randomly:1 resulted:1 national:1 individual:3 replacement:3 bw:1 maintain:2 n1:1 stationarity:4 investigate:2 highly:1 evaluation:2 adjust:2 joel:1 navigation:5 extreme:2 truly:1 behind:1 tj:1 implication:1 accurate:3 encourage:1 cartpole:6 institut:1 tree:2 hester:1 
taylor:1 old:1 re:2 theoretical:6 uncertain:2 assignment:1 leslie:1 applicability:1 kaelbling:4 addressing:1 pole:4 cost:1 uniform:1 usefulness:1 too:1 optimally:1 stored:1 dependency:1 perturbed:1 teacher:1 confident:1 st:21 density:2 international:4 stay:1 lee:1 rewarding:1 receiving:2 off:4 michael:1 tanner:1 quickly:2 precup:1 ucbi:2 satisfied:1 recorded:1 slowly:3 tile:3 hoeffding:1 lambda:2 worse:1 horowitz:2 book:1 conf:2 return:3 li:1 potential:6 de:1 coding:3 summarized:1 relocation:1 int:2 satisfy:3 doina:1 stream:1 performed:1 optimistic:11 portion:1 start:1 sort:1 reached:1 parallel:4 complicated:2 contribution:2 square:1 variance:6 efficiently:2 maximized:1 yield:2 generalize:1 weak:1 bayesian:2 accurately:1 bhatnagar:1 straight:1 mooney:1 history:1 converged:1 reach:1 complicate:1 finitestate:1 mihalkova:1 obvious:1 naturally:1 proof:4 di:1 gain:1 sampled:2 proved:3 adjusting:5 begun:2 knowledge:1 car:1 credible:1 shaping:1 actually:1 reflecting:1 higher:2 day:1 response:1 maximally:1 improved:1 evaluated:2 though:3 box:1 convergence1:1 furthermore:1 angular:1 varp:1 correlation:3 until:2 receives:2 lahiri:3 overlapping:1 mdp:4 usa:1 effect:4 validity:1 requiring:1 true:2 normalized:1 sieve:1 dependently:1 q0:1 illustrated:1 white:3 during:2 eligibility:4 maintained:1 percentile:2 flair:1 generalized:1 stone:2 demonstrate:1 tn:12 motion:1 nouri:2 resamples:2 recently:1 ari:2 funding:1 common:4 pseudocode:1 rl:3 volume:1 tail:1 extend:2 refer:1 freeze:1 cambridge:2 smoothness:1 tuning:6 rd:3 teaching:3 language:1 moving:10 robot:3 kwik:3 navigates:1 continuousstate:1 posterior:1 recent:3 driven:1 scenario:3 store:1 certain:2 meta:2 inequality:1 yi:1 fortunately:1 somewhat:1 promoted:1 determine:3 converge:2 maximize:1 period:1 signal:6 july:1 sound:1 reduces:1 smooth:1 faster:1 calculation:1 lai:1 visit:1 studentized:2 prediction:2 qi:2 converging:1 regression:1 koop:1 iteration:1 kernel:1 whitem:1 c1:2 background:1 want:1 whereas:1 separately:1 interval:58 decreased:1 addition:1 szepesv:2 source:1 biased:2 strict:1 comment:1 cummulative:1 cart:3 incorporates:1 effectiveness:2 nonstationary:6 near:4 bandwidth:1 reduce:1 idea:2 imperfect:2 qj:1 favour:1 passed:1 forecasting:1 peter:3 cause:1 action:25 useful:3 detailed:1 involve:1 nonparametric:1 discount:1 extensively:1 meir:1 notice:3 estimated:4 mbie:2 per:2 track:2 delta:2 econ:1 discrete:3 visitation:1 key:1 independency:1 reformulation:1 threshold:2 nevertheless:1 falling:1 drawn:4 changing:6 clarity:2 undertaken:1 sum:2 run:2 angle:2 uncertainty:4 family:1 reasonable:2 reader:1 almost:1 throughout:1 decision:8 summarizes:1 prefer:1 comparable:1 bound:6 guaranteed:1 lilyana:1 adapted:1 x2:1 flat:1 software:1 erroneously:1 span:1 hinkley:1 jackknife:1 department:2 according:2 teleported:2 combination:1 kd:1 describes:1 beneficial:1 making:6 dv:1 restricted:1 pr:1 sided:2 taken:1 previously:2 remains:1 discus:2 count:2 mechanism:1 bing:1 know:4 end:3 available:2 jenkins:1 operation:1 apply:1 appropriate:4 nicholas:1 robustly:1 frequentist:1 drifting:1 remaining:1 exploit:2 giving:3 practicality:2 restrictive:3 quantile:4 concatenated:1 approximating:1 classical:3 objective:2 move:1 question:1 parametric:1 dependence:5 rt:3 traditional:2 interacts:2 said:1 unproven:1 gradient:1 september:1 unable:1 thank:1 concatenation:1 parametrized:3 toward:2 assuming:2 length:6 modeled:1 providing:1 minimizing:1 balance:1 difficult:1 potentially:3 trace:3 stated:1 negative:2 implementation:1 policy:30 unknown:6 perform:1 twenty:1 upper:6 
discretize:1 observation:3 markov:15 finite:2 descent:1 variability:3 directing:1 prematurely:1 community:1 introduced:2 maei:1 pair:6 required:2 learned:4 nip:2 address:3 tennenholtz:1 bar:1 usually:3 below:1 eighth:1 azuma:1 biasing:1 sparsity:1 challenge:1 tb:1 including:1 memory:2 rtj:1 max:1 power:1 natural:3 treated:1 force:1 explorer:2 improve:2 mdps:5 library:1 rtn:1 auto:1 naive:1 speeding:1 sn:1 text:1 prior:1 raymond:1 acknowledgement:1 review:1 asymptotic:1 embedded:1 multiagent:2 expect:1 highlight:1 interesting:1 approximator:2 agent:26 xp:1 s0:4 consistent:1 balancing:3 strehl:2 uncountably:1 changed:1 brafman:1 last:3 free:3 tsitsiklis:1 guide:4 formal:1 bias:6 fall:1 sparse:10 distributed:2 benefit:1 feedback:1 xn:2 evaluating:3 transition:3 valid:2 qn:8 cumulative:2 world:1 collection:1 reinforcement:32 commonly:1 refinement:1 san:1 polynomially:1 approximate:3 implicitly:1 keep:1 global:2 active:3 robotic:1 conclude:1 francisco:1 continuous:15 search:1 table:1 promising:1 learn:1 robust:2 ca:2 obtaining:2 expansion:5 investigated:2 constructing:3 domain:12 did:2 main:1 bounding:1 noise:1 repeated:1 aamas:1 pivotal:1 icm:1 x1:1 simester:1 depicts:2 aid:1 lc:1 sub:1 position:2 exponential:1 candidate:3 jmlr:1 learns:1 theorem:2 transitioning:1 invalidating:1 showing:1 explored:1 divergent:1 decay:1 intractable:1 exists:1 restricting:1 sequential:3 supplement:1 flashing:3 acrobot:1 likely:4 explore:4 contained:1 ordered:1 tracking:7 nserc:1 scalar:3 springer:2 corresponds:1 satisfies:1 acm:3 goal:30 formulated:1 consequently:1 martha:1 change:3 aided:1 uniformly:1 called:1 bradford:1 experimental:2 ucb:4 puddle:1 rarely:1 select:5 formally:1 jong:1 mark:1 inability:1 arises:1 alexander:1 incorporate:1 evaluate:4 tested:1 |
3,413 | 4,091 | Active Learning Applied to Patient-Adaptive
Heartbeat Classification
John V. Guttag
CSAIL, MIT
[email protected]
Jenna Wiens
CSAIL, MIT
[email protected]
Abstract
While clinicians can accurately identify different types of heartbeats in electrocardiograms (ECGs) from different patients, researchers have had limited success
in applying supervised machine learning to the same task. The problem is made
challenging by the variety of tasks, inter- and intra-patient differences, an often
severe class imbalance, and the high cost of getting cardiologists to label data
for individual patients. We address these difficulties using active learning to perform patient-adaptive and task-adaptive heartbeat classification. When tested on
a benchmark database of cardiologist annotated ECG recordings, our method had
considerably better performance than other recently proposed methods on the two
primary classification tasks recommended by the Association for the Advancement of Medical Instrumentation. Additionally, our method required over 90%
less patient-specific training data than the methods to which we compared it.
1 Introduction
In 24 hours an electrocardiogram (ECG) can record over 100,000 heartbeats for a single patient.
Of course, a physician is not likely to look at all of them. Automated analysis of long-term ECG
recordings can help physicians understand a patient's physiological state and his/her risk for adverse
cardiovascular outcomes [1] [2]. Often, an important step in such analysis is labeling the different
types of heartbeats. This labeling reduces an ECG to a set of symbols transferable across patients.
Trained clinicians can successfully identify over a dozen different types of heartbeats in ECG recordings. However, researchers have had limited success using supervised machine learning techniques
to do the same. The problem is made challenging by the inter-patient differences present in the morphology and timing characteristics of the ECGs produced by compromised cardiovascular systems.
The variation in the physiological systems that produce the data means that a classifier trained on
even a large set of patients will yield unpredictable results when applied to a new cardiac patient.
For this reason, global classifiers are highly unreliable and therefore not widely used in practice [3].
Hu et al was one of the first to describe an automatic patient-adaptive ECG beat classifier [4]. It
distinguished ventricular ectopic beats (VEBs), from non-VEBs. This work employed a mixture of
experts approach, combining a global classifier with a local classifier trained on the first 5 minutes of
the test patient?s record. Similarly, de Chazal et al augmented the performance of a global heartbeat
classifier by including patient-specific expert knowledge for each test patient. Their local classifier
was trained on the first 500 labeled beats of each record [3]. More recently, Ince et al developed
a patient-adaptive classification scheme using artificial neural networks by incorporating the first 5
minutes of each test recording in the training set [5] .
Based on the results from these three studies, it is clear that patient-adaptive classifiers provide
increased classification accuracy. Unfortunately, patient-adaptive classifiers are not used in practice
because they require an unrealistic amount of labor to produce a cardiologist-labeled patient-specific
training set. Furthermore, by sampling all of the patient-specific training data from one portion of
the ECG, one is at risk for over-fitting to that patient's physiological state in time. Given a long-term record, which is likely to contain high intra-patient differences, it is likely that constructing the
training set in this manner will not yield a good representation of the patient's ECG.
There has been some success with hand-coded rule-based algorithms for heartbeat classification.
Hamilton et al developed a rule-based algorithm for detecting one type of particularly dangerous
ectopic heartbeat, the premature ventricular contraction (PVC) [6]. While reasonably accurate, rule-based algorithms are inflexible, since they can only be used for a single classification task. And to be
useful in practice, a classifier should not only be capable of adapting to new patients, but also to new
classification problems, since the classification task in question can change depending on the patient
or even the clinician. Since the field of ECG research is continuously evolving, tools to analyze the
signal should be capable of adapting.
In this paper, we show how active learning can be successfully applied to the problems of both
patient-adaptive and task-adaptive heartbeat classification. We developed our method with a clinical setting in mind: initially it requires no labeled data, it has no user-specified parameters, and
achieves good performance on an imbalanced data set. Applied to data from the MIT-BIH Arrhythmia Database our method outperforms current state-of-the-art machine learning heartbeat classification techniques and uses less training data. Moreover, our approach outperforms a rule-based
algorithm designed to detect an important class of abnormal beat. Finally, we discuss how the classification method performed when used in a prospective experiment with two cardiologists.
2 Background
We begin with a brief background on the signal of interest, the ECG. Since we will consider different heartbeat classification tasks we first present a few examples of heartbeat classes and ECG
abnormalities.
2.1 The ECG and ECG Abnormalities
An ECG records a patient's cardiac electrical activity by measuring the potential differences at the surface of the patient's body. In most healthy patients, the ECG, measured from Lead II, begins with
a P-wave, is followed by a QRS complex and ends with a T-wave. Figure 1(a) shows an example
of the ECG of a normal sinus rhythm beat (N). The exact morphology and timing of the different
portions of the wave depend on the patient and lead placement.
[Figure 1: three ECG trace panels (a)-(c), each plotting amplitude (mV) against time (s); panel (a) marks the P, Q, R, S, and T waves and the pre-RR, post-RR, RR, and QT intervals.]
Figure 1: Normal sinus rhythm beats like the ones shown in (a) originate from the pacemaker cells
of the sinoatrial node. Premature ventricular contractions (b) and atrial premature beats (c) are two
examples of ectopic beats.
Cardiac abnormalities can disrupt the heart?s normal sinus rhythm, and, depending on their type
and frequency, can vary from benign to life threatening. Examples of ectopic beats (beats that
do not originate in the sinoatrial node) are shown in Figures 1(b) and 1(c). Premature ventricular
contractions (PVCs), originate in the ventricles instead of in the pacemaker cells of the sinoatrial
node. They are common in patients who have suffered an acute myocardial infarction [7] and may
indicate that a patient is at increased risk for more serious ventricular arrhythmias and sudden cardiac
death [8]. When the electrical impulse originates from the atria, an atrial premature beat is recorded
by the ECG as shown in Figure 1(c). Atrial premature beats tend not to be life threatening.
Because of their specific timing and morphology characteristics these two types of abnormal beats
are generally distinguishable by trained cardiologists, but there are many exceptions. Not only can
abnormalities vary from patient to patient, but the same recording may contain beats that belong
to the same class but all look quite different. Figure 2 shows an example of an ECG containing
multiform PVCs.
Figure 2: Each PVC is marked by a 'V' and each normal sinus rhythm beat is marked by a '·'. The PVC morphology varies greatly among patients and even within recordings from a single patient.
3 Methods
In this section we describe the two main components of our heartbeat classification scheme. We
begin, with the process of feature extraction and then present the classification method.
3.1 Feature Extraction
Before extracting feature vectors, we pre-process and segment the ECG. We used PhysioNet's automated R-peak detector to detect the R-peaks of each heartbeat [9]. Next, we removed baseline
wander from the signals using the method described in [10]. Once pre-processed, the data was segmented into individual heartbeats based on fixed intervals before and after the R-peak, so that each
beat contained the same number of samples.
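A minimal sketch of this segmentation step is shown below, assuming R-peak sample indices are already available (e.g., from an external detector). The window lengths and the 128 Hz sampling rate are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def segment_beats(ecg, r_peaks, fs=128, pre_s=0.25, post_s=0.45):
    """Cut fixed-length windows around each R-peak.

    ecg     : 1-D array of ECG samples
    r_peaks : sample indices of detected R-peaks
    Returns an (n_beats, n_samples) array; beats too close to the signal
    boundary are skipped so that every row has the same length.
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    beats = [ecg[r - pre:r + post]
             for r in r_peaks
             if r - pre >= 0 and r + post <= len(ecg)]
    return np.vstack(beats)
```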
Our goal was to develop a feature vector that worked well not only across patients but also across different heartbeat classification tasks. This led us to use a combination of the ECG features proposed
in [10],[11], and [12]. The elements of the feature vector, x, are described in Table 1.
Table 1: Heartbeat features used in experiments.
Features         Description
x1, ..., x60     Wavelet coefficients from the last 5 levels of a 6-level wavelet decomposition using a Daubechies 2 wavelet
x61, x62, x63    The normalized energy in different segments of the beat
x64, x65, x66    The pre- and post-RR intervals normalized by a local average, and the average RR interval
x67              Morphological distance between the current beat and the record's median beat
The last, and most novel, feature in Table 1 is a measure of the morphological distance between
the represented beat and the median beat for a patient (recalculated every 500 beats). The feature is
based on the dynamic time warping algorithm used in [12] to measure the morphological distance
between a fixed interval that contains a portion of the Q-T intervals of two beats.
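As an illustration of how such a vector might be assembled, here is a hedged Python sketch using PyWavelets. The exact sub-bands yielding the 60 wavelet coefficients, the segment boundaries for the energy features, and the DTW computation behind x67 are not fully specified in the text, so those choices below are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def beat_features(beat, pre_rr, post_rr, local_rr_avg, morph_dist):
    """Assemble a feature vector in the spirit of Table 1.

    beat            : 1-D array of samples for one segmented heartbeat
    pre_rr, post_rr : RR intervals around this beat (seconds)
    local_rr_avg    : local average RR interval (seconds)
    morph_dist      : DTW-based distance to the record's median beat (x67),
                      computed elsewhere
    """
    # x1..x60: detail coefficients from the last 5 levels of a 6-level
    # 'db2' decomposition (how many coefficients survive depends on the
    # beat length; here we simply drop the two coarsest bands).
    coeffs = pywt.wavedec(beat, "db2", level=6)
    wavelet_feats = np.concatenate(coeffs[2:])
    # x61..x63: normalized energy in three segments of the beat.
    segments = np.array_split(beat, 3)
    energy = np.array([np.sum(s ** 2) for s in segments])
    energy = energy / (energy.sum() + 1e-12)
    # x64..x66: pre/post RR intervals normalized by the local average,
    # plus the average itself.
    rr_feats = np.array([pre_rr / local_rr_avg,
                         post_rr / local_rr_avg,
                         local_rr_avg])
    return np.concatenate([wavelet_feats, energy, rr_feats, [morph_dist]])
```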
3.2 Classification
Our goal was to develop a clinically useful patient-adaptive heartbeat classification method for solving different binary heartbeat classification problems. We designed the classifier for use in a clinical
setting, where physicians have little time to label beats, let alone tune classifier parameters. Thus,
it was important that the method should require few cardiologist-labeled heartbeats, and have no
user-defined parameters. Based on these goals we developed the algorithm presented below, which
combines different ideas from the literature [13-16].
Inputs:
(a) Unlabeled data {x1 , ..., xn }
(b) Max number of initial clusters per clustering, k
(c) SVM cost parameter C
(d) Stopping precision ε
1. Cluster the data using hierarchical clustering with two different linkage criteria, yielding <= 2 ? k clusters.
2. Query the centroid of each cluster. Add these points to the initially empty set of labeled examples.
3. If the expert labeled all the points as belonging to the same class, stop, else k = 1.
4. Train a linear SVM based on the labeled examples.
5. Apply the SVM to all of the data.
6. If all data that lies on or within the margin is labeled, stop.
7. Re-cluster data that lie on or within the margin using hierarchical clustering with k = k + 1.
8. Query the point from each cluster that lies closest to the current SVM decision boundary.
9. Repeat steps 4-8 until the change in the margin is within ε of zero.
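For concreteness, the sketch below implements this loop with SciPy's hierarchical clustering and a scikit-learn linear SVM. The `oracle` callable stands in for the cardiologist who labels a queried beat, the re-clustering inside the margin uses a single linkage criterion for brevity, and the small-cluster handling is our own simplification; none of this is the authors' released code.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.svm import SVC

def query_centroids(X, k):
    """Steps 1-2: cluster X with two linkage criteria and return the index
    of the point closest to each cluster's centroid."""
    picked = set()
    for method in ("average", "ward"):
        labels = fcluster(linkage(X, method=method), t=k, criterion="maxclust")
        for c in np.unique(labels):
            members = np.flatnonzero(labels == c)
            centroid = X[members].mean(axis=0)
            dists = np.linalg.norm(X[members] - centroid, axis=1)
            picked.add(int(members[np.argmin(dists)]))
    return sorted(picked)

def active_learn(X, oracle, k0=10, C=100.0, eps=1e-3):
    """Sketch of the query loop; oracle(i) returns the expert's label of beat i."""
    y = {i: oracle(i) for i in query_centroids(X, k0)}
    if len(set(y.values())) == 1:
        return None, sorted(y)              # step 3: one class only -> stop
    k, prev_margin = 1, np.inf
    while True:
        idx = sorted(y)                     # step 4: train on labeled beats
        svm = SVC(kernel="linear", C=C).fit(X[idx], [y[i] for i in idx])
        margin = 2.0 / np.linalg.norm(svm.coef_)
        scores = svm.decision_function(X)   # step 5: apply to all data
        in_margin = np.flatnonzero(np.abs(scores) <= 1.0)
        unl = np.array([i for i in in_margin if i not in y])
        # steps 6 and 9: stop when the margin is fully labeled or converged
        if unl.size == 0 or abs(prev_margin - margin) < eps:
            return svm, sorted(y)
        k += 1                              # step 7: re-cluster the margin
        if unl.size > k:
            labels = fcluster(linkage(X[unl], method="average"),
                              t=k, criterion="maxclust")
        else:
            labels = np.arange(unl.size)    # tiny margin: query everything
        for c in np.unique(labels):         # step 8: query per cluster
            sub = unl[labels == c]
            qi = int(sub[np.argmin(np.abs(scores[sub]))])
            y[qi] = oracle(qi)
        prev_margin = margin
```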
Many proposed techniques for SVM active learning assume one starts with some set of labeled data
or, as in [13], the initial training examples are randomly selected. In our application, we start with a
pool of completely unlabeled data. Furthermore, since there is often a severe class imbalance (e.g.,
some multi-thousand beat recordings contain less than a handful of PVCs), choosing a small or even
moderate number of random samples is unlikely to be an effective approach to finding representative
samples of a record. The choice of initial queries is crucial. If beats from only one class are queried
the algorithm could stop prematurely. More generally, the selection of the first set of queries is
independent of the binary task, and therefore the first query should contain at least one example
from each of the beat classes contained in the record. We use clustering in an effort to quickly
identify representative samples from each class.
We experimented with different clustering techniques before choosing hierarchical clustering. On
average hierarchical clustering outperformed other popular clustering techniques like k-means. We
believe this can be attributed to the fact that hierarchical clustering has the ability to produce a
variety of different clusters by modifying the linkage criterion. We chose to use two complementary
linkage criteria in attempt to address the intra-patient variation present in ECG records. The first
metric is average linkage. Average linkage defines the distance between two clusters, q and r, as the
average distance between all pairs of objects in q and r. This linkage is biased toward producing
clusters with similar variances, and has the tendency to merge clusters with small variances. The
second linkage criterion is Ward's linkage [17], defined in Equation 1.
d(q, r) = ss(qr) - [ss(q) + ss(r)]    (1)
where ss(qr) is the within-cluster sum of squares for the resulting cluster when q and r are combined. The within-cluster sum of squares, ss(x), is defined as the sum of squares of the distances
between all objects in the cluster and the centroid of the cluster:
ss(x) = \sum_{i=1}^{n_x} \left| x_i - \frac{1}{n_x} \sum_{j=1}^{n_x} x_j \right|^2    (2)
Using Ward's linkage tends to join clusters with a small number of points, and is biased towards
producing clusters with approximately the same number of samples. If presented with an outlier,
Ward's method tends to assign it to the cluster with the closest centroid, whereas the average linkage
tends to assign it to the densest cluster, where it will have the smallest impact on the maximum
variance [18].
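In code, Eqs. (1) and (2) reduce to a few lines of NumPy (each cluster is a matrix whose rows are feature vectors); this is a direct transcription of the definitions, not the paper's implementation:

```python
import numpy as np

def ss(X):
    """Within-cluster sum of squares, Eq. (2)."""
    return float(np.sum((X - X.mean(axis=0)) ** 2))

def ward_distance(Q, R):
    """Ward's linkage d(q, r) = ss(qr) - [ss(q) + ss(r)], Eq. (1)."""
    return ss(np.vstack([Q, R])) - (ss(Q) + ss(R))
```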
Once the initial queries are labeled, we train a linear SVM, and apply this SVM to all of the data.
We use linear SVMs because most heartbeat classification tasks are close to linearly separable and
because linear SVMs require few tuning parameters. Next, we re-cluster the data on or within the
margin of the SVM, incrementing the max number of clusters with each iteration. We then query a
beat from each cluster that is closest to the SVM decision boundary.
As described above, our algorithm would halt when no unlabeled data lay on or within the margin.
For some records, however, e.g., those with fusion beats - a fusion of normal and abnormal beats
- many beats can lie within the margin of the SVM and thus a clinician might end up labeling
hundreds of beats that add little useful information. Intuitively, one should stop querying when
additional training data has little to no effect on the solution. The algorithm, therefore, terminates
when the change in the margin between iterations is within ε.
4 Experiments & Results
We implemented our algorithm in MATLAB, and used SVMlight [19] to train the linear SVM at each iteration. We held the cost parameter of the linear SVM constant, at C = 100, throughout all experiments. This value was selected based on previous cross-validation experiments. The stopping precision was held constant at ε = 10^-3. Typical ECG recordings contain beats from 2 to 5 classes
precision was held constant at = 10?3 . Typical ECG recordings contain beats from 2 to 5 classes
but can contain more; based on this a priori knowledge, we conservatively set k = 10. This value
was held constant throughout all experiments.
To test the utility of our proposed approach for heartbeat classification we ran a series of experiments on data from different patients, and for different classification tasks. First, we compare the
performance of a classifier obtained using our approach to two classifiers recently presented in the
literature. Next, we directly measure the impact active learning has on the classification of heartbeats by creating our own passive learning classifier using the same pre-processing and features as
our proposed active learning method. Finally, we test our method using actual cardiologists.
In our experiments we report the classification performance in terms of sensitivity (SE), specificity
(SP), and positive predictive value (PPV). As an overall measure of performance we use the F-score:
F =
2 ? SE ? P P V
SE + P P V
(3)
The F-score is a commonly-accepted performance evaluation measure in medicine and information
retrieval where one data class (often the positive class) is more important than the other [20]. We
use this measure since the problem of heartbeat classification suffers from severe class imbalance,
and thus the SE (aka recall) and the PPV (aka precision) are more important than SP.
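Since these quantities recur in every table that follows, a tiny helper makes the definitions unambiguous; the two example calls use the Table 4 totals reported later and reproduce the aggregate F-scores quoted in Section 4.2 (our own arithmetic, not the paper's code).

```python
def metrics(tp, tn, fp, fn):
    """Sensitivity (recall), specificity, PPV (precision), and F-score."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ppv = tp / (tp + fp)
    f = 2 * se * ppv / (se + ppv)
    return se, sp, ppv, f

# Table 4 totals: proposed F ~ 0.992, passive F ~ 0.936
print(metrics(7169, 102573, 47, 66))
print(metrics(6540, 102427, 193, 695))
```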
4.1 Classification Performance
We tested performance on the MIT-BIH Arrhythmia Database (MITDB) [9], a widely used benchmark database that contains 48 half-hour ECG recordings, sampled at 360Hz, from 47 different
patients. Twenty-three of these records, labeled 100 to 124 were selected at random from a source
of 4000 recordings. The remaining 25 records, labeled 200 to 234 were selected because they contain rare clinical activity that might not have been represented had all 48 records been chosen at
random. The database contains approximately 109,000 cardiologist labeled heartbeats. Each beat
is labeled as belonging to one of 16 different classes. In some sense, the data in the MITDB is
too good. It was collected at 360Hz, which is a higher sampling rate than is typical for the Holter
monitors used to gather most long term clinical data. To simulate this kind of data, we resampled
the pre-processed ECG signal at 128Hz.
We consider the two main classification tasks proposed by the Association for the Advancement of
Medical Instrumentation (AAMI): detecting ventricular ectopic beats (VEBs), and detecting supraventricular ectopic beats (SVEBs). These two tasks have been the focus of other researchers investigating patient-adaptive heartbeat classification. Recently, Ince et al [5] and de Chazal et al
[3] described methods that combine global information with patient-specific information. Ince et al
trained a global classifier on 245 hand chosen beats from the MITDB, and then adapted the global
classifier by training on labeled data from the first five minutes of each test record. Their reported
results of testing on 44 of the 48 records - all records with paced beats were excluded - from the
MITDB are reported in Table 2. De Chazal et al trained their global classifier on all of the data from
22 patients in the MITDB, and then adapted the global classifier by training on labeled data for the
first 500 beats of each test record. Their reported results of testing on 22 records -different from the
ones used in the global training set- from the MITDB are also reported in Table 2.
For the same two classification tasks we tested our proposed approach and we report the results
when tested on the records reported on in [5] and [3]. In these experiments we exclude the queried
beats from the test set, testing only on data the expert hasn't seen. This was also done in [5] and [3]. Since we query far fewer beats than the other methods, we end up testing on many more beats.
Table 2: Our proposed method outperforms other classifiers for two common classification tasks.
                        VEB                                  SVEB
Classifier      SE      SP      PPV     F-Score      Sens    Spec     PPV     F-Score
Ince et al      84.6%   98.7%   87.4%   86.0%        63.5%   99.0%    53.7%   58.2%
Proposed[1]     99.0%   99.9%   99.2%   99.1%        88.3%   100.0%   99.2%   93.4%
Chazal et al    94.3%   99.7%   96.2%   95.2%        87.7%   96.2%    47.0%   61.2%
Proposed[2]     99.6%   99.9%   99.3%   99.5%        92.0%   100.0%   99.5%   95.6%
[1] for the 44 records in common
[2] for the 22 records in common
As Table 2 shows, the method proposed here does considerably better than the methods proposed in
[5] and [3] for each task. For the task of classifying VEBs vs. non-VEBs, our method on average
used 45 labeled beats (compared to roughly 350 beats for [5] and 500 beats for [3]) per record. For
the task of detecting SVEBs, our method used even fewer labeled beats. Recognizing SVEBs is
considerably more difficult than detecting VEBs since the class imbalance problem is even more
severe and supra-ventricular beats are harder to distinguish from normal sinus rhythm beats.
Table 3: Our algorithm outperforms a rule-based classifier designed specifically for the task of
detecting PVCs.
Classifier       SE      SP       PPV     F-Score
Hamilton et al   92.8%   99.0%    79.5%   85.7%
Proposed[3]      98.4%   100.0%   99.3%   99.1%
[3] for all 48 records
Hamilton et al proposed a rule-based classifier for classifying PVCs vs. non-PVCs. Their software
is freely available online, from eplimited.com. We applied their software to all of the records, see
Table 3. Their method does particularly poorly on the four records containing paced beats. Omitting
these four records the F-Score increases to 91.4%, still worse than our method. One advantage of
the rule-based algorithm is that it does not require a labeled training set, whereas on average we
require 45 labeled beats per record. However, unlike our method the rule-based algorithm can only
be used for one task.
4.2 The Impact of Active Learning
We hypothesize that the difference in performance between our method and the other learning-based
methods discussed above is attributable partly to the design of our feature vector and partly to the
method of choosing training data. In order to test this hypothesis we ran an experiment that directly
compares the effect of actively vs. passively selecting the training set, with all other parameters kept
the same (e.g., identical pre-processing, identical feature vectors, etc.).
For each of the 48 records in the MITDB we compare a VEB vs. non-VEB classifier using our
approach, to a linear SVM classifier trained on the first 500 beats of each record. For each patient
we record the number of queries made, as well as the performance of each classifier. Table 4 shows
the classification results for each method across all patients. The column headed '#Q' gives the number of beats used for training each classifier, while the column headed 'TP', for true positives,
gives the number of correctly labeled VEBs. The last row gives the totals across all records for each
classification method.
Overall, our classification approach achieves an F-score over 99%, and the passive technique
achieves an F-score of 94%. Compared to the passive approach, active learning used over 90%
less training data, and resulted in over 85% fewer misclassified heartbeats. These results emphasize that fact that active learning can be used to dramatically reduce the labor cost of producing
highly accurate classifiers. That the passive technique performed better than [5] and almost as well
as [3], despite not having any global training data, suggests that our feature vector provides some
advantage.
Table 4: Active versus passive learning. Active learning outperforms a passive approach, and uses
over 90% less data.
Active vs. Passive VEB Classification Results

                   Proposed                           Passive
Record    #Q    TP    TN      FP   FN      #Q    TP    TN      FP   FN
100       22     1    2258     0    0      500    0    2269     0    1
101       19     0    1851     0    0      500    0    1861     0    0
102       28     4    2162     0    0      500    3    2181     0    1
103       20     0    2073     0    0      500    0    2082     0    0
104       30     2    2214     0    0      500    1    2224     0    1
105       54    41    2501     0    0      500   41    2521     1    0
106       50   520    1497     0    0      500  507    1506     0   13
107       31    59    2070     0    0      500   11    2076     0   48
108       52    17    1717     0    0      500   17    1740     4    0
109       45    38    2463     0    0      500   36    2492     0    2
111       22     1    2107     0    0      500    0    2122     0    1
112       20     0    2529     0    0      500    0    2537     0    0
113       19     0    1783     0    0      500    0    1793     0    0
114       51    43    1807     0    0      500   42    1833     2    1
115       20     0    1942     0    0      500    0    1951     0    0
116       34   109    2283     0    0      500  106    2301     0    3
117       20     0    1523     0    0      500    0    1533     0    0
118       30    16    2242     0    0      500    4    2260     0   12
119       32   444    1523     0    0      500  444    1542     0    0
121       24     1    1849     0    0      500    0    1860     0    1
122       20     0    2464     0    0      500    0    2473     0    0
123       26     3    1500     0    0      500    3    1513     0    0
124       32    41    1558     0    6      500   30    1570     0   17
200      124   825    1717     0    1      500  799    1773     0   27
201       45   198    1737     0    0      500    0    1764     0  198
202       41    19    2088     0    0      500   19    2114     1    0
203      103   410    2456    15   34      500  397    2453    79   47
205       36    70    2574     0    1      500   65    2583     0    6
207      109   203    2016     4    7      500  190    2060    59   20
208       90   986    1916     0    6      500  977    1953     5   15
209       29     1    2993     0    0      500    0    3002     0    1
210       90   190    2392     1    5      500  180    2434    16   15
212       20     0    2736     0    0      500    0    2746     0    0
213      137   215    2985    20    5      500  157    3016    12   63
214       53   256    1988     0    0      500  254    2002     1    2
215       52   164    3168     0    0      500  164    3196     1    0
217       61   162    2009     0    0      500  159    2045     0    3
219       41    63    2065     0    1      500   52    2089     0   12
220       20     0    2035     0    0      500    0    2045     0    0
221       33   396    2022     0    0      500  393    2030     0    3
222       20     0    2472     0    0      500    0    2480     0    0
223       86   473    2094     7    0      500  321    2119    11  152
228       66   362    1662     0    0      500  356    1690     0    6
230       30     1    2242     0    0      500    0    2253     0    1
231       24     2    1552     0    0      500    2    1567     0    0
232       20     0    1771     0    0      500    0    1779     0    0
233       91   830    2216     0    0      500  810    2245     1   20
234       26     3    2731     0    0      500    0    2749     0    3
Totals  2148  7169  102573    47   66    24000 6540  102427   193  695

4.3 Experiments with Clinicians
To get a sense of the feasibility of using our approach in an actual clinical setting, we ran an experiment with two cardiologists and data from another cohort of patients admitted with NSTEACS.
The ECG tracings in this database, unlike those in the MITDB, are not particularly clean, i.e., they
contain a considerable amount of noise and many artifacts. This makes them more representative of
the data with which an algorithm in clinical use is likely to have to deal. We considered 4 randomly
chosen records, from a subset of patients who had experienced at least one episode of ventricular
tachycardia in the 7 day period following randomization. For each record, we consider the first
half-hour, giving us a test set of 8230 heartbeats.
In these experiments we used a slightly different stopping criterion than the one developed earlier. As our algorithm chose beats to be labeled, each cardiologist was presented with an ECG plot of the heartbeat
to be labeled and the beats surrounding it, like the one shown in Figure 3. The cardiologist was
then asked to label it according to the following key: 1 = clearly non-PVC, 2 = ambiguous non-PVC, 3 = ambiguous PVC, 4 = clearly PVC. Because the cardiologists made different choices about how
some beats should be labeled, one was asked to label an average of 15 beats/record and the other
roughly 20 beats/record. The whole process took each cardiologist about 90 seconds per record.
Since the records had not been previously labeled (and it seemed unreasonable to ask our experts
to label all of them), we used the PVC classification software from [6] to provide a label to which
[Figure 3: a single ECG trace, amplitude (mV) vs. time (s), spanning roughly 4.4 seconds.]
Figure 3: The classifiers trained using active learning both labeled the delineated beat as a PVC, whereas the rule-based algorithm labeled it as a non-PVC.
Table 5: Comparison of active learning using two different experts and Hamilton et al. Results are the sum across four records.

All Records (8230 beats total)
Classifier        Size Training Data    TP    TN     FP   FN
Expert #1         60                    191   8038   0    1
Expert #2         83                    192   8035   3    0
Hamilton et al    0                     190   8035   3    2
we could compare the labels generated by our method. This gave us three independently generated
labels for each beat. When all three classifiers agreed, we assumed that the beat was correctly
classified. Out of a possible 8230 disagreements there were only 6. We asked a third expert to
adjudicate all 6 disagreements, and used this as the gold standard to calculate the results for the
three classifiers shown in Table 5.
5 Summary & Conclusion
The goal of this work was to produce a clinically useful technique for automatically classifying
activity in ECG recordings. The problem is made challenging by the intra- and inter-patient differences present in the morphology and timing characteristics of the ECG produced by compromised
cardiovascular systems and by the variability in the classification tasks that a clinician might want to
perform. We propose to address these difficulties with a method for using active learning to perform
patient-adaptive and task-adaptive heartbeat classification.
When tested on the most widely used benchmark database of cardiologist annotated ECG recordings, our method had better performance than other recently proposed methods on the two primary
classification tasks recommended by AAMI. Additionally, our method required over 90% less training data than the methods to which it was compared. We also showed that our method compares
favorably to a state-of-the-art hand coded algorithm for a third common classification task.
To test out the practical applicability of our method, we conducted a small study with two cardiologists. Both cardiologists were able to use our tool with minimal training, and achieved excellent
classification results with a small amount of labor per record.
These preliminary results are highly encouraging, and suggest that active learning can be used practically in a clinical setting to not only reduce the labor cost but also garner additional improvements
in performance. Of course, there is still room for improvement. In all experiments we used identical
input parameters; further tuning of these parameters may improve results. However, in a clinical setting parameter tuning is impractical, and thus more work to investigate automated parameter tuning
is needed. Based on preliminary experiments we believe that by first learning the optimal number
of initial clusters for each record one can improve performance while decreasing the total number
of required labels. It may also be possible to further reduce the amount of required expert labor by
starting with a global classifier and then adapting it using active learning.
Acknowledgments
We would like to thank Benjamin Scirica, Collin Stultz, and Zeeshan Syed for sharing their expert
knowledge in cardiology and for their participation in our experiments. This work was supported in
part by the NSERC and by Quanta Computer Inc.
References
[1] D. V. Exner, K. M. Kavanagh, M. P. Slawnych et al, and for the REFINE Investigators. Noninvasive risk assessment early after a myocardial infarction: The REFINE study. J Am Coll Cardiol, 50(24):2275–2284, 2007.
[2] Z. Syed, B. Scirica, S. Mohanavel, P. Sung, C. Cannon, P. Stone, C. Stultz, and J. V. Guttag. Relation to death within 90 days of non-ST-elevation acute coronary syndromes to variability in electrocardiographic morphology. Am J of Cardiol, 103(3), 2009.
[3] P. de Chazal and R. B. Reilly. A Patient-Adapting Heartbeat Classifier Using ECG Morphology and Heartbeat Interval Features. Biomedical Engineering, IEEE Transactions on, 53(12):2535–2543, Dec. 2006.
[4] Y. H. Hu, S. Palreddy, and W. J. Tompkins. A Patient-Adaptable ECG Beat Classifier Using a Mixture of Experts Approach. Biomedical Engineering, IEEE Transactions on, 44(9):891–900, Sept. 1997.
[5] T. Ince, S. Kiranyaz, and M. Gabbouj. A generic and robust system for automated patient-specific classification of ECG signals. IEEE Transactions on Biomedical Engineering, 56(5), May 2009.
[6] P. Hamilton. Open Source ECG Analysis. In Computers in Cardiology, volume 29, pages 101–104, 2002.
[7] J. Bigger, F. Dresdale, and R. Heissenbuttel et al. Ventricular arrhythmias in ischemic heart disease: mechanism, prevalence, significance, and management. Prog Cardiovasc Dis, 19:255, 1977.
[8] T. Smilde, D. van Veldhuisen, and M. van den Berg. Prognostic value of heart rate variability and ventricular arrhythmias during 13-year follow up in patients with mild to moderate heart failure. Clinical Research in Cardiology, 98(4):233–239, 2009.
[9] A. L. Goldberger, L. A. N. Amaral, and L. Glass et al. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation, 101(23):e215–e220, 2000 (June 13). Circulation Electronic Pages: http://circ.ahajournals.org/cgi/content/full/101/23/e215.
[10] P. de Chazal, M. O'Dwyer, and R. B. Reilly. Automatic Classification of Heartbeats Using ECG Morphology and Heartbeat Interval Features. IEEE Transactions on Biomedical Engineering, 51:1196–1206, 2004.
[11] K. Sternickel. Automatic pattern recognition in ECG time series. In Computer Methods and Programs in Biomedicine, Vol. 68, pages 109–115, 2002.
[12] Z. Syed, J. Guttag, and C. Stultz. Clustering and Symbolic Analysis of Cardiovascular Signals: Discovery and Visualization of Medically Relevant Patterns in Long-term Data Using Limited Prior Knowledge. EURASIP Journal on Advances in Signal Processing, 2007:97–112, 2007.
[13] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, 2:45–66, 2002.
[14] S. Dasgupta and D. Hsu. Hierarchical sampling for active learning. In ICML '08: Proceedings of the 25th international conference on Machine learning, pages 208–215, New York, NY, USA, 2008. ACM.
[15] Z. Xu, K. Yu, V. Tresp, X. Xu, and J. Wang. Representative sampling for text classification using support vector machines. In Proceedings of the twenty-fifth European Conference on Information Retrieval, pages 393–407. Springer, 2003.
[16] H. T. Nguyen and A. Smeulders. Active learning using pre-clustering. In Proceedings of the twenty-first international conference on Machine learning, page 79, New York, NY, USA, 2004. ACM.
[17] J. H. Ward. Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58(301):234–244, 1963.
[18] S. Kamvar, D. Klein, and C. Manning. Interpreting and extending classical agglomerative clustering algorithms using a model-based approach. In Proceedings of the nineteenth International Conference on Machine Learning, pages 283–290, 2002.
[19] T. Joachims. Making Large-scale Support Vector Machine Learning Practical. MIT Press, Cambridge, MA, USA, 1999.
[20] M. Sokolova, N. Japkowicz, and S. Szpakowicz. Beyond Accuracy, F-score and ROC: a Family of Discriminant Measures for Performance Evaluation, volume 4304 of Lecture Notes in Computer Science, pages 1015–1021. Springer Berlin/Heidelberg, 2006.
3,414 | 4,092 | Structural epitome: A way to summarize one's visual
experience
Nebojsa Jojic
Microsoft Research
Alessandro Perina
Microsoft Research
University of Verona
Vittorio Murino
Italian Institute of Technology
University of Verona
Abstract
In order to study the properties of total visual input in humans, a single subject
wore a camera for two weeks capturing, on average, an image every 20 seconds.
The resulting new dataset contains a mix of indoor and outdoor scenes as well
as numerous foreground objects. Our first goal is to create a visual summary
of the subject's two weeks of life using unsupervised algorithms that would automatically discover recurrent scenes, familiar faces or common actions. Direct
application of existing algorithms, such as panoramic stitching (e.g., Photosynth)
or appearance-based clustering models (e.g., the epitome), is impractical due to
either the large dataset size or the dramatic variations in the lighting conditions.
As a remedy to these problems, we introduce a novel image representation, the
?structural element (stel) epitome,? and an associated efficient learning algorithm.
In our model, each image or image patch is characterized by a hidden mapping T
which, as in previous epitome models, defines a mapping between the image coordinates and the coordinates in the large "all-I-have-seen" epitome matrix. The
limited epitome real-estate forces the mappings of different images to overlap
which indicates image similarity. However, the image similarity no longer depends on direct pixel-to-pixel intensity/color/feature comparisons as in previous
epitome models, but on spatial configuration of scene or object parts, as the model
is based on the palette-invariant stel models. As a result, stel epitomes capture
structure that is invariant to non-structural changes, such as illumination changes,
that tend to uniformly affect pixels belonging to a single scene or object part.
1 Introduction
We develop a novel generative model which combines the powerful invariance properties achieved
through the use of hidden variables in epitome [2] and stel (structural element) models [6, 8]. The
latter set of models have a hidden stel index si for each image pixel i. The number of discrete states
si can take is small, typically 4-10, as the stel indices point to a small palette of distributions over local measurements, e.g., color. The actual local measurement xi (e.g. color) for pixel i is assumed to
have been generated from the appropriate palette entry. This constrains the pixels with the same stel
index s to have similar colors or whatever local measurements xi represent. The indexing scheme is
further assumed to change little across different images of the same scene/object, while the palettes
can vary significantly. For example, two images of the same scene captured in different levels of
overall illumination would still have very similar stel partitions, even though their palettes may be
vastly different. In this way, the image representation rises above a matrix of local measurements in
favor of a matrix of stel indices which can survive remarkable non-structural image changes, as long
as these can be explained away by a change in the (small) palette. For example, in Fig. 1B, images
of pedestrians are captured by a model that has a prior distribution of stel assignments shown in the
first row. The prior on stel probabilities for each pixel adds up to one, and the 6 images showing
these prior probabilities add up to a uniform image of ones. Several pedestrian images are shown
1
[Figure 1 panels: A) graphical models of the epitome, PIM, and stel epitome; B) probabilistic index map parameters (stels 1-6, palettes, and per-image posteriors q(s)); C) stel epitome parameters and inference; D) four frames; E) alignment with stels; F) alignment of intensity images; G) the regular epitome [2].]
Figure 1: A) Graphical models of Epitome, Probabilistic index map (PIM) and Stel epitome. B)
Examples of PIM parameters. C) Example of stel epitome parameters. D) Four frames, aligned with
the stel epitome in E)-F). G) The original epitome model [2] trained on these four frames.
below with their posterior distributions over stel assignments, as well as the mean color of each stel.
This illustrates that the different parts of the pedestrian images are roughly matched. Torso pixels,
for instance, are consistently assigned to stel s = 3, despite the fact that different people wore shirts
or coats of very different colors. Such a consistent segmentation is possible because torso pixels
tend to have similar colors within any given image and because the torso is roughly in the same
position across images (though misalignment of up to half the size of the segments is largely tolerated). While the figure shows the model with S=6 stels, larger numbers of stels were shown to lead
to further segmentation of the head and even splitting of the left from right leg [6]. Motivated by
similar insights as in [6], a number of models followed, e.g. [7, 13, 14, 8], as the described addition
of hidden variables s achieves the remarkable level of intensity invariance first demonstrated through
the use of similarity templates [12], but at a much lower computational cost.
In this paper, we embed the stel image representation within a large stel epitome: a stel prior matrix,
like the one shown in the top row of Fig. 1B, but much larger so that it can contain representations
of multiple objects or scenes. This requires the additional transformation variables T for each image
whose role is to align it with the epitome. The model is thus qualitatively enriched in two ways: 1)
the model is now less sensitive to misalignment of images, as through alignment to the epitome, the
images are aligned to each other, and 2) interesting structure emerges when the epitome real estate
is limited so that though it is much larger than the size of a single image, it is still much smaller
than the real estate needed to simply tile all images without overlap. In that case, a large collection
of images must naturally undergo an unsupervised clustering in order for this real estate to be used
as well as possible (or as well as the local minimum obtained by the learning algorithm allows).
This clustering is quite different from the traditional notion of clustering. As in the original epitome
models, the transformation variables play both the alignment and cluster indexing roles. Different
models over the typical scenes/objects have to compete over the positions in the epitome, with a
panoramic version of each scene emerging in different parts of the epitome, finally providing a rich
image indexing scheme. Such a panoramic scene submodel within the stel epitome is illustrated
in Fig. 1C. A portion of the larger stel epitome is shown with 3 images that map into this region.
The region represents one of two home offices in the dataset analyzed in the experiments. Stel s=1
captures the laptop screen, while the other stels capture other parts of the scene, as well as large
shadowing effects (while the overall changes in illumination and color changes in object parts rarely
affect stel representations, the shadows can break the stel invariance, and so the model learned to
cope with them by breaking the shadows across multiple stels). The three images shown, mapping
to different parts of this region, have very different colors as they were taken at different times of
day and across different days, and yet their alignment is not adversely affected, as it is evident in
their posterior stel segmentation aligned to the epitome.
To further illustrate the panoramic alignment, we used the epitome mapping to show for the 4 different images in Fig. 1D how they overlap with stel s=4 of another office image (Fig. 1E), as well as
how multiple images of this scene, including these 4, look when they are aligned and overlapped as
intensity images in Fig. 1F. To illustrate the gain from palette-invariance that motivated this work,
we show in Fig. 1G the original epitome model [2] trained on images of this scene. Without the
invariances afforded by the stel representation, the standard color epitome has to split the images of
the scene into two clusters, and so the laptop screen is doubled there.
Qualitatively quite different from both epitomes and previous stel models, the stel epitome is a
model flexible enough to be applied to a very diverse set of images. In particular, we are interested
in datasets that might represent well a human?s total visual input over a longer period of time,
and so we captured two weeks worth of SenseCam images, taken at a frequency of roughly one
image every 20 seconds during all waking hours of a human subject over a period of two weeks
(www.research.microsoft.com/~jojic/aihs).
2 Stel epitome
The graphical model describing the dependencies in stel epitomes is provided in Fig. 1A. The parametric forms for the conditional distributions are standard multinomial and Gaussian distributions
just as the ones used in [8]. We first consider the generation of a single image or an image patch (depending on which visual scale we are epitomizing), and, for brevity, temporarily omit the subscript
t indexing different images.
The epitome is a matrix of multinomial distributions over S indices s ? {1, 2, ..., S}, associated
with each two-dimensional epitome location i:
$$p(s_i = s) = e_i(s). \qquad (1)$$
Thus each location in the epitome contains S probabilities (adding to one) for different indices.
Indices for the image are assumed to be generated from these distributions. The distribution over
the entire collection of pixels (either from an entire image, or a patch), p({xi }|{si }, T, ?), depends
on the parametrization of the transformations T . We adopt the discrete transformation model used
previously in graphical models e.g. [1, 2], where the shifts are separated from other transformations
such as scaling or rotation, T = (ℓ, r), with ℓ being a 2-dimensional shift and r being the index into
the set of other transformations, e.g., combinations of rotation and scaling:
$$p(\{x_i\} \mid \{s_i\}, T, \Lambda) \;=\; \prod_i p(x^r_{i-\ell} \mid s_i, \Lambda) \;=\; \prod_i p(x^r_{i-\ell} \mid \Lambda_{s_i}), \qquad (2)$$
where superscript r indicates transformation of the image x by the r-th transformation, and i − ℓ is
the mod-difference between the two-dimensional variables with respect to the edges of the epitome
(the shifts wrap around). Λ is the palette associated with the image, and Λ_s is its s-th entry.
Various palette models for probabilistic index / structure element map models have been reviewed
in [8]. For brevity, in this paper we focus on the simplest case where the image measurements are
simply pixel colors, and the palette entries are simply Gaussians with parameters Λ_s = (μ_s, σ_s).
In this case, p(x^r_{i−ℓ} | Λ_{s_i}) = N(x^r_{i−ℓ}; μ_{s_i}, σ_{s_i}), and the joint likelihood over observed and hidden
variables can be written as
$$P \;=\; p(\Lambda)\, p(\ell, r)\, \prod_i \prod_s \big[\, N(x^r_{i-\ell};\, \mu_s, \sigma_s)\, e_i(s) \,\big]^{[s_i = s]}, \qquad (3)$$
where [] is the indicator function.
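To make the generative side concrete, here is a minimal NumPy sketch that evaluates the joint log-likelihood of Eq. (3) for the simplified, shift-only model with a uniform shift prior. The function name, the assumption that the image covers the full M × N epitome grid (mask omitted), and the array layouts are illustrative choices, not taken from the paper.

```python
import numpy as np

def stel_log_joint(x, s, ell, e, mu, sigma):
    """Log of the joint P in Eq. (3), shift-only model, single image.

    x         : (M, N) observed image (assumed to cover the epitome grid)
    s         : (M, N) integer stel assignments s_i in {0, ..., S-1}
    ell       : (dy, dx) 2-D shift (wrap-around, as in the paper)
    e         : (M, N, S) epitome prior e_i(s)
    mu, sigma : (S,) palette means and variances, Lambda_s = (mu_s, sigma_s)
    """
    # align the image with the epitome: entry i holds x_{i - ell}
    x_sh = np.roll(x, shift=ell, axis=(0, 1))
    # Gaussian log-likelihood of each pixel under its stel's palette entry
    ll_pix = -0.5 * np.log(2 * np.pi * sigma[s]) \
             - (x_sh - mu[s]) ** 2 / (2 * sigma[s])
    # prior over the stel assignments, log e_i(s_i)
    ll_prior = np.log(np.take_along_axis(e, s[..., None], axis=2)[..., 0])
    # uniform prior over the M*N possible shifts
    return ll_pix.sum() + ll_prior.sum() - np.log(x.size)
```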
To derive the inference and learning algorithms for the model, we start with a posterior distribution
model Q and the appropriate free energy, ∑ Q log(Q/P). The standard variational approach, however,
is not as straightforward as we might hope as major obstacles need to be overcome to avoid local
minima and slow convergence. To focus on these important issues, we further simplify the problem
and omit both the non-shift part of the transformations (r) and palette priors p(Λ), and for consistency, we also omit these parts of the model in the experiments. These two elements of the model
can be dealt with in the manner proposed previously: The R discrete transformations (scale/rotation
combinations, for example) can be inferred in a straight-forward way that makes the entire algorithm
that follows R times slower (see [1] for using such transformations in a different context), and the
various palette models from [8] can all be inserted here with the update rules adjusted appropriately.
A large stel epitome is difficult to learn because decoupling of all hidden variables in the posterior leads to severe local minima, with all images either mapped to a single spot in the epitome,
or mapped everywhere in the epitome so that the stel distribution is flat. This problem becomes
particularly evident in larger epitomes, due to the imbalance in the cardinalities of the three types of
hidden variables. To resolve this, we either need a very high numerical precision (and considerable
patience), or the severe variational approximations need to be avoided as much as possible. It is
indeed possible to tractably use a rather expressive posterior
$$Q \;=\; q(\ell) \prod_s q(\Lambda_s \mid \ell) \prod_i q(s_i), \qquad (4)$$
further setting q(Λ_s | ℓ) = δ(μ_s − μ̂_{s,ℓ}) δ(σ_s − σ̂_{s,ℓ}), where δ is the Dirac function. This leads to
$$F \;=\; H(Q) \;+\; \sum_{s,\ell,i} q(\ell)\, q(s_i{=}s)\, \frac{x^2_{i-\ell}}{2\hat\sigma_{s,\ell}} \;-\; \sum_{s,\ell,i} q(\ell)\, q(s_i{=}s)\, \frac{\hat\mu_{s,\ell}\, x_{i-\ell}}{\hat\sigma_{s,\ell}} \;+\; \sum_{s,\ell,i} q(\ell)\, q(s_i{=}s)\, \frac{\hat\mu^2_{s,\ell}}{2\hat\sigma_{s,\ell}} \;-\; \sum_s \sum_i q(s_i{=}s)\, \log e_i(s), \qquad (5)$$
where H(Q) is the entropy of the posterior distribution. Setting to zero the derivatives of this free
energy with respect to the variational parameters (the probabilities q(s_i = s) and q(ℓ), and the palette
means and variance estimates μ̂_{s,ℓ}, σ̂_{s,ℓ}) we obtain a set of updates for iterative inference.
2.1 E step
The following steps are iterated for a single image x on an m × n grid and for a given epitome
distribution e(s) on an M × N grid. Index i corresponds to the epitome coordinates, and masks
m are used to describe which of all M × N coordinates correspond to image coordinates. In the
variational EM learning on a collection of images indexed by t, these steps are done for each image,
yielding posterior distributions indexed by t, and then the M step is performed as described below.
We initialize q(s_i = s) = e_i(s) and then iterate the following steps in the order given.
Palette updates:
$$\hat\mu_{s,\ell} \;=\; \frac{\sum_\ell \sum_i m_{i-\ell}\; q(s_i{=}s)\; q(\ell)\; x_{i-\ell}}{\sum_\ell \sum_i q(s_i{=}s)\; q(\ell)\; m_{i-\ell}} \qquad (6)$$
$$\hat\sigma_{s,\ell} \;=\; \frac{\sum_\ell \sum_i m_{i-\ell}\; q(s_i{=}s)\; q(\ell)\; x^2_{i-\ell}}{\sum_\ell \sum_i q(s_i{=}s)\; q(\ell)\; m_{i-\ell}} \;-\; \hat\mu^2_{s,\ell} \qquad (7)$$
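A direct, unoptimized sketch of the palette updates (6)-(7) follows; the explicit double loop over shifts mirrors the sums as displayed (and, as noted at the end of Section 2, these sums can be computed far more efficiently with convolutions). All names, layouts, and the small epsilon are illustrative.

```python
import numpy as np

def palette_updates(x, q_s, q_ell, mask, eps=1e-12):
    """Palette updates of Eqs. (6)-(7), with all arrays on the M x N grid.

    x     : (M, N)    image embedded on the epitome grid
    q_s   : (M, N, S) posterior q(s_i = s)
    q_ell : (M, N)    posterior q(ell) over 2-D shifts
    mask  : (M, N)    m_i: ones on the m x n default image support
    """
    M, N, S = q_s.shape
    num_mu = np.zeros(S); num_x2 = np.zeros(S); den = np.zeros(S)
    for dy in range(M):
        for dx in range(N):
            if q_ell[dy, dx] == 0.0:
                continue
            m_sh = np.roll(mask, (dy, dx), axis=(0, 1))   # m_{i - ell}
            x_sh = np.roll(x, (dy, dx), axis=(0, 1))      # x_{i - ell}
            w = q_ell[dy, dx] * m_sh[..., None] * q_s     # weight per (i, s)
            num_mu += (w * x_sh[..., None]).sum(axis=(0, 1))
            num_x2 += (w * (x_sh ** 2)[..., None]).sum(axis=(0, 1))
            den += w.sum(axis=(0, 1))
    mu = num_mu / (den + eps)                             # Eq. (6)
    sigma = num_x2 / (den + eps) - mu ** 2                # Eq. (7)
    return mu, sigma
```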
Epitome mapping update:
$$\log q(\ell) \;=\; \mathrm{const} \;+\; \frac{1}{2} \sum_{i,s} q(s_i{=}s)\, \log 2\pi\hat\sigma_{i-\ell} \qquad (8)$$
This update is derived from the free energy and from the expression for σ̂ above. This equation
can be used as is when the epitome e(s) is well defined (that is, the entropy of the component stel
distribution is low in the latter iterations), as long as the usual care is taken in exponentiation before
normalization: the maximum log q(ℓ) should be subtracted from all elements of the M × N matrix
log q(ℓ) before exponentiation.
In the early iterations of EM, however, when distributions ei (s) have not converged yet, numerical
imprecision can stop the convergence, leaving the algorithm at a point which is not even a local
minimum. The reason for this is that after the normalization step we described, q(ℓ) will still be very
peaky, even for relatively flat e(s) due to the large number of pixels in the image. The consequence is
that low alignment probabilities are rounded down to zero, as after exponentiation and normalization
their values go below numerical precision. If there are areas of the epitome where no single image is
mapped with high probability, then the update in those areas in the M step would have to depend on
the low-probability mappings for different images, and their relative probabilities would determine
which of the images contribute more and which less to updating these areas of the epitome. To
preserve the numerical precision needed for this, we set k thresholds τ_k, and compute log q̃(ℓ)_k, the
distributions at the k different precision levels:
$$\log \tilde q(\ell)_k \;=\; [\log q(\ell) \ge \tau_k]\cdot \tau_k \;+\; [\log q(\ell) < \tau_k]\cdot \log q(\ell),$$
where [·] is the indicator function. This limits how high the highest probability in the map is allowed
to be. The k-th distribution sets all values above τ_k to be equal to τ_k.
We can now normalize these k distributions as discussed above:
$$\tilde q(\ell)_k \;=\; \frac{\exp\{\log \tilde q(\ell)_k - \max_\ell \log \tilde q(\ell)_k\}}{\sum_\ell \exp\{\log \tilde q(\ell)_k - \max_\ell \log \tilde q(\ell)_k\}}$$
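In code, the clipping and per-level normalization amount to a handful of lines; the sketch below takes the thresholds τ_k as given (their number and values are a design choice, not specified by the paper).

```python
import numpy as np

def precision_levels(log_q, taus):
    """Compute the K clipped-and-normalized maps q~(ell)_k from log q(ell).

    log_q : (M, N) unnormalized log posterior over shifts
    taus  : iterable of K thresholds tau_k
    """
    levels = []
    for tau in taus:
        log_qk = np.where(log_q >= tau, tau, log_q)  # cap the peak at tau_k
        log_qk = log_qk - log_qk.max()               # subtract max before exp
        qk = np.exp(log_qk)
        levels.append(qk / qk.sum())
    return levels                                    # list of (M, N) maps
```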
To keep track of which precision level is needed for different ℓ, we calculate the masks
$$\tilde m_{i,k} \;=\; \sum_\ell \tilde q(\ell)_k \cdot m_{i-\ell},$$
where m is the mask discussed above, with ones in the upper left corner's m × n entries and zeros
elsewhere, designating the default image position for a shift of ℓ = 0 (or, given that shifts are defined
with a wrap-around, the shift of ℓ = (M, N)). Masks m̃_{i,k} provide the total weight of the image
mapping at the appropriate epitome location at different precision levels.
Posterior stel distribution q(s) update at multiple precision levels:
$$\log \tilde q(s_i{=}s)_k \;=\; \mathrm{const} \;-\; \sum_\ell \sum_{i \mid i-\ell \in C} \tilde q(\ell)_k\, \frac{x^2_{i-\ell}}{2\hat\sigma_{s,\ell}} \;+\; \sum_\ell \sum_{i \mid i-\ell \in C} \tilde q(\ell)_k\, \frac{\hat\mu_{s,\ell}\, x_{i-\ell}}{\hat\sigma_{s,\ell}} \;-\; \sum_\ell \sum_{i \mid i-\ell \in C} \tilde q(\ell)_k\, \frac{\hat\mu^2_{s,\ell}}{2\hat\sigma_{s,\ell}} \;+\; \tilde m_{i,k} \cdot \log e(s_i{=}s). \qquad (9)$$
To keep track of these different precision levels, we also define a mask M so that M_i = k indicates
that the k-th level of detail should be used for epitome location i. The k-th level is reserved for those
locations that have only the values from up to the k-th precision band of q(ℓ) mapped there (we will
have m · n mappings of the original image to each epitome location, as this many different shifts
will align the image so as to overlap with any given epitome location). One simple, though not most
efficient, way to define this matrix is M_i = 1 + ⌊∑_k m̃_{i,k}⌋.
We now normalize log q̃(s_i = s)_k to compute the distribution at k different precision levels, q̃(s_i = s)_k,
and compute q(s) integrating the results from the different numerical precision levels as
q(s_i = s) = ∑_k [M_i = k] · q̃(s_i = s)_k.
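Assembling q(s_i) from the per-level posteriors is then a per-location selection; a sketch follows (the array layout is assumed for illustration, not prescribed by the paper).

```python
import numpy as np

def combine_levels(log_qs_levels, M_map):
    """q(s_i = s) = sum_k [M_i = k] * q~(s_i = s)_k.

    log_qs_levels : (K, M, N, S) unnormalized log q~(s_i = s)_k per level
    M_map         : (M, N) integer level index M_i in {0, ..., K-1}
    """
    K = log_qs_levels.shape[0]
    lq = log_qs_levels - log_qs_levels.max(axis=-1, keepdims=True)
    qs = np.exp(lq)
    qs /= qs.sum(axis=-1, keepdims=True)             # normalize each level
    pick = (M_map[None, ..., None] == np.arange(K)[:, None, None, None])
    return (pick * qs).sum(axis=0)                   # (M, N, S)
```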
2.2 M step
The highest k for each epitome location, D_i = max_t {M_i^t}, is determined over all images x^t in
the dataset, so that we know the appropriate precision level at which to perform summation and
normalization. Then the epitome update consists of:
$$e(s_i{=}s) \;=\; \sum_k [D_i = k]\; \frac{\sum_t [M^t_i = k]\; q^t(s_i{=}s)}{\sum_t [M^t_i = k]}.$$
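A sketch of this update, with the per-image posteriors assumed pre-computed and stacked (the layout is hypothetical):

```python
import numpy as np

def epitome_update(q_s_levels, M_maps, D):
    """M-step epitome update, averaging each location at its level D_i.

    q_s_levels : (T, K, M, N, S) posteriors q~^t(s_i = s)_k per image t, level k
    M_maps     : (T, M, N) integer level maps M_i^t
    D          : (M, N) highest level per location, D_i = max_t M_i^t
    """
    T, K, Mh, Nw, S = q_s_levels.shape
    e = np.zeros((Mh, Nw, S))
    for k in range(K):
        sel = (M_maps == k)                              # indicator [M_i^t = k]
        num = (sel[..., None] * q_s_levels[:, k]).sum(axis=0)
        cnt = sel.sum(axis=0)[..., None]
        avg = np.where(cnt > 0, num / np.maximum(cnt, 1), 0.0)
        e += (D == k)[..., None] * avg                   # keep locations with D_i = k
    return e
```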
Figure 2: Some examples from the dataset (www.research.microsoft.com/~jojic/aihs). The pictured
scene classes are: bike, kitchen, car, work office, dining room, outside home, home office, tennis
field, laptop room, and living room.
Note that most of the summations can be performed by convolution operations and, as a result, the
complexity of the algorithm is O(S · MN · log MN) for M × N epitomes.
3 Experiments
Using a SenseCam wearable camera, we have obtained two weeks worth of images, taken at the rate
of one frame every 20 seconds during all waking hours of a human subject. The resulting image
dataset captures the subject's (summer) life rather completely in the following sense: Majority of
images can be assigned to one of the emergent categories (Fig. 2) and the same categories represent
the majority of images from any time period of a couple of days. We are interested in appropriate
summarization, browsing, and recognition tasks on this dataset. This dataset also proved to be
fundamental for testing stel epitomes, as the illumination and viewing angle variations are significant
across images and we found that the previous approaches to scene recognition provide only modest
recognition rates. For the purposes of evaluation, we have manually labeled a random collection
of 320 images and compared our method with other approaches on supervised and unsupervised
classification. We divided this reduced dataset in 10 different recurrent scenes (32 images per class);
some examples are depicted in Fig. 2. In all the experiments with the reduced dataset we used an
epitome area 14 times larger than the image area and five stels (S=5). The numerical results reported
in the tables are averaged over 4 train/test splits.
In supervised learning the scene labels are available during the stel epitome learning. We used this
information to aid both the original epitome [9] and the stel epitome modifying the models by the
addition of an observed scene class variable c in two ways: i) by linking c in the Bayesian network
with e, and so learning p(e|c), and ii) by linking c with T inferring p(T |c). In the latter strategy,
where we model p(T |c), we learn a single epitome, but we assume that the epitome locations are
linked with certain scenes, and this mapping is learned for each epitome pixel. Then, the distribution
p(c|ℓ) over scene labels can be used for inference of the scene label for the test data. For a previously
unseen test image x^t, recognition is achieved by computing the label posterior p(c^t | x^t) using
p(c^t | x^t) = ∑_ℓ p(c|ℓ) · p(ℓ | x^t).
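This inference rule is a single weighted sum; as a sketch (names hypothetical):

```python
import numpy as np

def scene_posterior(q_ell, p_c_given_ell):
    """p(c|x) = sum_ell p(c|ell) * p(ell|x) for one test image.

    q_ell         : (M, N)    posterior p(ell | x) over epitome shifts
    p_c_given_ell : (M, N, C) learned scene distributions p(c | ell)
    """
    p_c = (q_ell[..., None] * p_c_given_ell).sum(axis=(0, 1))
    return p_c / p_c.sum()
```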
We compared our approach with the epitomic location recognition method presented in [9], with
Latent Dirichlet allocation (LDA) [4], and with the Torralba approach [11]. We also compared
with baseline discriminative classifiers and with the pyramid matching kernel approach [5], using
SIFT features [3]. For the above techniques that are based on topic models, representing images
as spatially disorganized bags of features, the codebook of SIFT features was based 16x16 pixel
patches computed over a grid spaced by 8 pixels. We chose a number of topics Z = 45 and 200
codewords (W = 200). The same quantized dictionary has been employed in [5].
To provide a fair comparison between generative and discriminative methods, we also used the
free energy optimization strategy presented in [10], which provides an extra layer of discriminative
training for an arbitrary generative model. The comparisons are provided in Table 1. Accuracies
achieved using the free energy optimization strategy [10] are reported in the Opt. column.
Table 1: Classification accuracies.

Method                   Accuracy   [10] Opt.
Stel epitome, p(T|c)     70.06%     n.a.
Stel epitome, p(e|c)     88.67%     98.70%
Epitome [9], p(T|c)      74.36%     n.a.
Epitome [9], p(e|c)      69.80%     79.14%
LDA [4]                  74.23%     80.11%
GMM [11]                 56.81%     n.a.
SIFT + K-NN              79.42%     n.a.
[5], C=3                 96.67%     n.a.
We also trained both the regular epitome and the stel epitome in an unsupervised way. An illustration of the resulting stel epitome is provided in Fig. 3. The 5 panels marked s = 1, . . . , 5 show the
stel epitome distribution. Each of these panels is an image ei (s) for an appropriate s. On the top
of the stel epitome, four enlarged epitome regions are shown to highlight panoramic reconstructions
of a few classes. We also show the result of averaging all images according to their mapping to the
stel epitome (Fig. 3D) for comparison with the traditional epitome (Fig.3C) which models colors
rather than stels. As opposed to the stel epitome, the learned color epitome [2] has to have multiple
versions of the same scene in different illumination conditions. Furthermore, many different scenes
tend to overlap in the color epitome, especially indoor scenes which all look equally beige. Finally,
in Fig. 3B we show examples of some images of different scenes mapped onto the stel epitome,
whose organization is illustrated by a rendering of all images averaged into the appropriate location
(similarly to the original color epitomes). Note that the model automatically clusters images using
the structure, and not colors, even in face of variation of colors present in the exemplars of the "Car"
or the "Work office" classes (see also the supplemental video that illustrates the mapping dynamically). The regular epitome cannot capture these invariances, and it clusters images based on overall
intensity more readily than based on the structure of the scene. We evaluated the two models numerically in the following way. Using the two types of unsupervised epitomes, and the known labels
for the images in the training set, we assigned labels to the test set using the same classification rule
explained in the previous paragraph. This semi-supervised test reveals how consistent the clustering
induced by epitomes is with the human labeling. The stel epitome accuracy, 73.06%, outperforms
the standard epitome model [9], 69.42%, with statistical significance.
We have also trained both types of epitomes over a real estate 35 times larger than the original image
size using different random sets of 5000 images taken from the dataset. The stel epitomes trained in
an unsupervised way are qualitatively equivalent, in that they consistently capture around six of the
most prominent scenes from Fig. 2, whereas the traditional epitomes tended to capture only three.
4 Conclusions
The idea of recording our experiences is not new. (For a review and interesting research directions
see [15]). It is our opinion that recording, summarizing and browsing continuous visual input is particularly interesting. With the recent substantial increases in radio connectivity, battery life, display
size, and computing power of small devices, and the availability of even greater computing power
offline, summarizing one's total visual input is now both a practically feasible and scientifically
interesting target for vision research. In addition, a variety of applications may arise once this functionality is provided. As a step in this direction, we provide a new dataset that contains a mix of
indoor and outdoor scenes as a result of two weeks of continuous image acquisition, as well as a
simple algorithm that deals with some of the invariances that have to be incorporated in a model of
such data. However, it is likely that modeling the geometry of the imaging process will lead to even
more interesting results. Although straightforward application of panoramic stitching algorithms,
such as Photosynth, did not work on this dataset, because of both the sheer number of images and
the significant variations in the lighting conditions, such methods or insights from their development
will most likely be very helpful in further development of unsupervised learning algorithms for such
types of datasets. The geometry constraints may lead to more reliable background alignments for
the next logical phase in modeling for ?All-I-have-seen? datasets: The learning of the foreground
object categories such as family members? faces. As this and other such datasets grow in size, the
unsupervised techniques for modeling the data in a way where interesting visual components emerge
over time will become both more practically useful and scientifically interesting.
Figure 3: Stel epitome of images captured by a wearable camera. [Figure panels: A) the stel
epitome, shown per stel (s=1, s=2, ...); B) image mappings on the stel epitome, including work
office, car, home office, and kitchen stel-panoramas; C) the regular epitome; D) the stel epitome
reconstruction.]
References
[1] B. Frey and N. Jojic, "Transformation-invariant clustering using the EM algorithm," TPAMI
2003, vol. 25, no. 1, pp. 1-17.
[2] N. Jojic, B. Frey, A. Kannan, "Epitomic analysis of appearance and shape," ICCV 2003.
[3] D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," IJCV, 2004, vol. 60,
no. 2, pp. 91-110.
[4] L. Fei-Fei, P. Perona, "A Bayesian Hierarchical Model for Learning Natural Scene Categories,"
IEEE CVPR 2005, pp. 524-531.
[5] S. Lazebnik, C. Schmid, J. Ponce, "Beyond Bags of Features: Spatial Pyramid Matching for
Recognizing Natural Scene Categories," IEEE CVPR, 2006, pp. 2169-2178.
[6] N. Jojic and C. Caspi, "Capturing image structure with probabilistic index maps," IEEE CVPR
2004, pp. 212-219.
[7] J. Winn and N. Jojic, "LOCUS: Learning Object Classes with Unsupervised Segmentation,"
ICCV 2005.
[8] N. Jojic, A. Perina, M. Cristani, V. Murino and B. Frey, "Stel component analysis: modeling
spatial correlation in image class structure," IEEE CVPR 2009.
[9] K. Ni, A. Kannan, A. Criminisi and J. Winn, "Epitomic Location Recognition," IEEE CVPR
2008.
[10] A. Perina, M. Cristani, U. Castellani, V. Murino and N. Jojic, "Free energy score-space," NIPS
2009.
[11] A. Torralba, K. P. Murphy, W. T. Freeman and M. A. Rubin, "Context-based vision system for
place and object recognition," ICCV 2003, pp. 273-280.
[12] C. Stauffer, E. Miller, and K. Tieu, "Transform invariant image decomposition with similarity
templates," NIPS 2003.
[13] V. Ferrari, A. Zisserman, "Learning Visual Attributes," NIPS 2007.
[14] B. Russell, A. Efros, J. Sivic, B. Freeman, A. Zisserman, "Segmenting Scenes by Matching
Image Composites," NIPS 2009.
[15] G. Bell and J. Gemmell, Total Recall. Dutton Adult, 2009.
3,415 | 4,093 | Deciphering subsampled data: adaptive compressive
sampling as a principle of brain communication
Guy Isely
Redwood Center for Theoretical Neuroscience
University of California, Berkeley
[email protected]
Christopher J. Hillar
Mathematical Sciences Research Institute
[email protected]
Friedrich T. Sommer
University of California, Berkeley
[email protected]
Abstract
A new algorithm is proposed for a) unsupervised learning of sparse representations from subsampled measurements and b) estimating the parameters required
for linearly reconstructing signals from the sparse codes. We verify that the new
algorithm performs efficient data compression on par with the recent method of
compressive sampling. Further, we demonstrate that the algorithm performs robustly when stacked in several stages or when applied in undercomplete or overcomplete situations. The new algorithm can explain how neural populations in
the brain that receive subsampled input through fiber bottlenecks are able to form
coherent response properties.
1 Introduction
In the nervous system, sensory and motor information, as well as internal brain states, are represented by action potentials in populations of neurons. Most localized structures, such as sensory
organs, subcortical nuclei and cortical regions, are functionally specialized and need to communicate through fiber projections to produce coherent brain function [14]. Computational studies of the
brain usually investigate particular functionally and spatially defined brain structures. Our scope
here is different as we are not concerned with any particular brain region or function. Rather, we
study the following fundamental communication problem: How can a localized neural population
interpret a signal sent to its synaptic inputs without knowledge of how the signal was sampled or
what it represents? We consider the generic case that information is encoded in the activity of a local
population (e.g. neurons of a sensory organ or a peripheral sensory area) and then communicated to
the target region through an axonal fiber projection. Any solution of this communication problem is
constrained by the following known properties of axonal fiber projections:
Exact point-to-point connectivity genetically undefined: During development, genetically informed chemical gradients coarsely guide the growth of fiber projections but are unlikely to specify
the precise synaptic patterns to target neurons [17]. Thus, learning mechanisms and synaptic plasticity seem necessary to form the precise wiring patterns from projection fibers to target neurons.
Fiber projections constitute wiring bottlenecks: The number of axons connecting a pair of regions
is often significantly smaller than the number of neurons encoding the representation within each
region [10]. Thus, communication across fiber projections seems to rely on a form of compression.
Sizes of origin and target regions may differ: In general, the sizes of the region sending the fibers
and the region targeted by them will be different. Thus, communication across fiber projections will
often involve a form of recoding.
We present a new algorithm for establishing and maintaining communication that satisfies all three
constraints above. To model imprecise wiring, we assume that connections between regions are
configured randomly and that the wiring scheme is unknown to the target region. To account for
the bottleneck, we assume these connections contain only subsampled portions of the information
emanating from the sender region; i.e., learning in the target region is based on subsampled data and
not the original.
Our work suggests that axon fiber projections can establish interfaces with other regions according
to the following simple strategy: Connect to distant regions randomly, roughly guided by chemical
gradients, then use local unsupervised learning at the target location to form meaningful representations of the input data. Our results can explain experiments in which retinal projections were
redirected neonatally to the auditory thalamus and the rerouting produced visually responsive cells in
auditory thalamus and cortex, with properties that are typical of cells in visual cortex [12]. Further,
our model makes predictions about the sparsity of neural representations. Specifically, we predict
that neuronal firing is sparser in locally projecting neurons (upper cortical layers) and less sparse in
neurons with nonlocal axonal fiber projections. In addition to the neurobiological impact, we also
address potential technical applications of the new algorithm and relations to other methods in the
literature.
2 Background
Sparse signals: It has been shown that many natural signals falling onto sensor organs have a higher-order structure that can be well-captured by sparse representations in an adequate basis; see [9, 6]
for visual input and [1, 11] for auditory. The following definitions are pivotal to this work.
Definition 1: An ensemble of signals X within R^n has sparse underlying structure if there is a
dictionary Ψ ∈ R^{n×p} so that any point x ∈ R^n drawn from X can be expressed as x = Ψv for a
sparse vector v ∈ R^p.
Definition 2: An ensemble of sparse vectors V within R^p is a sparse representation of a signal
ensemble X in R^n if there exists a dictionary Ψ ∈ R^{n×p} such that the random variable X satisfies
X = ΨV.
For theoretical reasons, we consider ensembles of random vectors (i.e. random variables) which
arise from an underlying probability distribution on some measure space, although for real data sets
(e.g. natural image patches) we cannot guarantee this to be the case. Nonetheless, the theoretical
consequences of this assumption (e.g. Theorem 4.2) appear to match what happens in practice for
real data (figures 2-4).
Compressive sampling with a fixed basis: Compressive sampling (CS) [2] is a recent method for
representing data with sparse structure using fewer samples than required by the Nyquist-Shannon
theorem. In one formulation [15], a signal x ∈ R^n is assumed to be k-sparse in an n × p dictionary
matrix Ψ; that is, x = Ψa for some vector a ∈ R^p with at most k nonzero entries. Next, x is
subsampled using an m × n incoherent matrix Φ to give noisy measurements y = Φx + w with
m ≪ n and independent noise w ~ N(0, σ² I_{m×m}). To recover the original signal, the following
convex optimization problem (called Lasso in the literature) is solved:
$$\hat a(y) \;:=\; \arg\min_b\; \frac{1}{2n}\, \|y - \Phi\Psi b\|_2^2 \;+\; \lambda\, |b|_1, \qquad (1)$$
and then x̂ := Ψâ is set to be the approximate recovery of x. Remarkably, as can be shown using
[15, Theorem 1], the preceding algorithm determines a unique â and is guaranteed to be exact within
the noise range:
$$\|x - \hat x\|_2 = O(\sigma) \qquad (2)$$
with high probability (exponential in m/k) as long as the matrix ΦΨ satisfies mild incoherence
hypotheses, λ = Θ(σ √((log p)/m)), and the sparsity is on the order k = O(m / log p).
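For illustration (not code from the paper), the recovery step of Eq. (1) can be run with scikit-learn's Lasso, whose objective (1/(2·n_samples))·||y − Ab||² + α·|b|₁ matches the scaling above up to the choice of α; the function name and the value of lam are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

def cs_recover(y, Phi, Psi, lam):
    """Solve the Lasso of Eq. (1) and return the recovered signal x_hat.
    `lam` plays the role of lambda and should scale like sigma*sqrt(log(p)/m).
    """
    A = Phi @ Psi                                # effective sensing dictionary
    solver = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    solver.fit(A, y)                             # minimizes 1/(2m)||y - Ab||^2 + lam*|b|_1
    a_hat = solver.coef_
    return Psi @ a_hat, a_hat                    # x_hat = Psi @ a_hat
```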
Typically, the matrix Ψ is p × p orthogonal, and the incoherence conditions reduce to deterministic
constraints on Φ only. Although in general it is very difficult to decide whether a given Φ satisfies
these conditions, it is known that many random ensembles, such as i.i.d. Φ_ij ~ N(0, 1/m), satisfy
them with high probability. In particular, compression ratios on the order (k log p)/p are achievable
for k-sparse signals using a random Φ chosen this way.
Dictionary learning by sparse coding: For some natural signals there are well-known bases (e.g.
Gabor wavelets, the DCT) in which those signals are sparse or nearly sparse. However, an arbitrary
class of signals can be sparse in unknown bases, some of which give better encodings than others.
It is compelling to learn a sparse dictionary for a class of signals instead of specifying one in advance. Sparse coding methods [6] learn dictionaries by minimizing the empirical mean of an energy
function that combines the ℓ2 reconstruction error with a sparseness penalty on the encoding:
$$E(x, a, \Psi) = \|x - \Psi a\|_2^2 + \lambda S(a). \qquad (3)$$
A common choice for the sparsity penalty S(a) that works well in practice is the ℓ1 penalty
S(a) = |a|_1. Fixing Ψ and x and minimizing (3) with respect to a produces a vector â(x) that
approximates a sparse encoding for x.¹ For a fixed set of signals x and encodings a, minimizing
the mean value of (3) with respect to Ψ and renormalizing columns produces an improved sparse
dictionary. By alternating optimization steps of this form, one can learn a dictionary that is tuned to the
statistics of the class of signals studied. Sparse coding on natural stimuli has been shown to learn
basis vectors that resemble the receptive fields of neurons in early sensory areas [6, 7, 8]. Notice that
once an (incoherent) sparsity-inducing dictionary Ψ is learned, inferring sparse vectors â(x) from
signals x is an instance of the Lasso convex optimization problem.
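A minimal sketch of this alternating scheme follows; λ, the learning rate, the epoch count, and the use of scikit-learn's (multi-target) Lasso as the inference step are all illustrative choices, not the paper's implementation (which uses the feature sign algorithm).

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coding(X, p, lam=0.1, epochs=50, lr=0.01, seed=0):
    """Alternating minimization of Eq. (3) with S(a) = |a|_1.
    X : (n, N) matrix whose columns are training signals."""
    rng = np.random.default_rng(seed)
    n, N = X.shape
    Psi = rng.standard_normal((n, p))
    Psi /= np.linalg.norm(Psi, axis=0)                 # unit-norm columns
    for _ in range(epochs):
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(Psi, X)                              # one sparse code per column of X
        A = lasso.coef_.T                              # (p, N) codes
        Psi += lr * ((X - Psi @ A) @ A.T) / N          # gradient step on Eq. (3)
        Psi /= np.linalg.norm(Psi, axis=0)             # renormalize columns
    return Psi
```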
Blind Compressed Sensing: With access to an uncompressed class of sparse signals, dictionary
learning can find a sparsity-inducing basis which can then be used for compressive sampling. But
what if the uncompressed signal is unavailable? Recently, this question was investigated in [4] using
the following problem statement.
Blind compressed sensing (BCS): Given a measurement matrix Φ and measurements {y_1, ..., y_N}
of signals {x_1, ..., x_N} drawn from an ensemble X, find a dictionary Ψ and k-sparse vectors
{b_1, ..., b_N} such that x_i = Ψb_i for each i = 1, ..., N.
It turns out that the BCS problem is ill-posed in the general case [4]. The difficulty is that though
it is possible to learn a sparsity-inducing dictionary θ for the measurements Y, there are many
decompositions of this dictionary into Φ and a matrix Ψ, since Φ has a nullspace. Thus, without
additional assumptions, one cannot uniquely recover a dictionary Ψ that can reconstruct x as Ψb.
3 Adaptive Compressive Sampling
It is tantalizing to hypothesize that a neural population in the brain could combine the principles
of compressive sampling and dictionary learning to form sparse representations of inputs arriving
through long-range fiber projections. Note that information processing in the brain should rely on
faithful representations of the original signals but does not require a solution of the ill-posed BCS
problem which involves the full reconstruction of the original signals. Thus, the generic challenge
a neural population embedded in the brain might have to solve can be captured by the following
problem.
Adaptive compressive sampling (ACS): Given measurements Y = ΦX generated from an unknown
Φ and an unknown signal ensemble X with sparse underlying structure, find signals B(Y) which are
sparse representations of X.
Note the two key differences between the ACS and the BCS problem. First, the ACS problem
asks only for sparse representations b of the data, not full reconstruction. Second, the compression
matrix Φ is unknown in the ACS problem but is known in the BCS problem. Since it is unrealistic
to assume that a brain region could have knowledge of how an efferent fiber bundle subsamples the
brain region it originates from, the second difference is also crucial. We propose a relatively simple
algorithm for potentially solving the ACS problem: use sparse coding for dictionary learning in the
¹ As a convention in this paper, a vs. b denotes a sparse representation inferred from full vs. compressed
signals.
Figure 1: ACS schematic. A signal x with sparse structure in dictionary Ψ is sampled by a compressing
measurement matrix Φ, constituting a transmission bottleneck. The ACS coding circuit learns a
dictionary θ for y in the compressed space, but can be seen to form sparse representations b of the
original data x as witnessed by the matrix RM in (6). [Diagram: a → (Ψ) → x → (Φ) → y → (θ) → b,
with RM mapping b back toward x.]
compressed space. The proposed ACS objective function is defined as:
$$E(y, b, \theta) = \|y - \theta b\|_2^2 + \lambda S(b). \qquad (4)$$
Iterated minimization of the empirical mean of this function, first with respect to b and then with
respect to θ, will produce a sparsity dictionary θ for the compressed space and sparse representations
b̂(y) of the y. Our results verify theoretically and experimentally that once the dictionary matrix θ
has converged, the objective (4) can be used to infer sparse representations of the original signals x
from the compressed data y. As has been shown in the BCS work, one cannot uniquely determine
θ with access only to the compressed signals y. But this does not imply that no such matrix exists.
In fact, given a separate set of uncompressed signals x0, we calculate a reconstruction matrix RM
demonstrating that the b̂ are indeed sparse representations of the original x. Importantly, the x0 are
not used to solve the ACS problem, but rather to demonstrate that a solution was found.
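Since (4) has exactly the form of (3) with y and θ in place of x and Ψ, the alternating loop sketched in Section 2 carries over verbatim to the compressed measurements; for instance, reusing the hypothetical sparse_coding function from above:

```python
# learn the sparsity dictionary entirely in the compressed space (Eq. (4));
# neither Phi nor the uncompressed X is needed at this stage
theta = sparse_coding(Y, p=256, lam=0.1)   # Y: (m, N) compressed measurements
```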
The process for computing RM using the x0 is analogous to the process used by electrophysiologists
to measure the receptive fields of neurons. Electrophysiologists are interested in characterizing how
neurons in a region respond to different stimuli. They use a simple approach to determine these
stimulus-response properties: probe the neurons with an ensemble of stimuli and compute stimulus-response correlations. Typically it is assumed that a neural response b is a linear function of the
stimulus x; that is, b = RF·x for some receptive field matrix RF. One may then calculate an RF
by minimizing the empirical mean of the prediction error: E(RF) = ‖b − RF·x‖²₂. As shown in
[13], the closed-form solution to this minimization is RF = C_ss⁻¹ C_sr, in which C_ss is the stimulus
autocorrelation matrix ⟨xx⊤⟩_X, and C_sr is the stimulus-response cross-correlation matrix ⟨xb⊤⟩_X.
In contrast to the assumption of a linear response typically made in electrophysiology, here we
assume a linear generative model: x = Ψa. Thus, instead of minimizing the prediction error, we
ask for the reconstruction matrix RM that minimizes the empirical mean of the reconstruction error:
$$E(R_M) = \|x - R_M b\|_2^2. \qquad (5)$$
In this case, the closed-form solution of this minimization is given by
$$R_M = C_{sr}\, C_{rr}^{-1}, \qquad (6)$$
in which C_sr is the stimulus-response cross-correlation matrix as before and C_rr is the response
autocorrelation matrix ⟨b̂(y(x)) b̂(y(x))⊤⟩_X. As we show below, calculating (6) from a set of
uncompressed signals x0 yields an RM that reconstructs the original signal x from b̂ as x̂ = RM b̂.
Thus, we can conclude that encodings b̂ computed by ACS are sparse representations of the original
signals.
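Computing RM from a probe set is two empirical correlations and a matrix inverse; a sketch (names hypothetical, with a pseudo-inverse standing in for the inverse in case C_rr is estimated from few probes):

```python
import numpy as np

def reconstruction_matrix(X_probe, B_probe):
    """R_M = C_sr C_rr^{-1} (Eq. (6)).
    X_probe : (n, N) uncompressed probe signals (columns)
    B_probe : (p, N) their ACS encodings b_hat(y(x))"""
    N = X_probe.shape[1]
    C_sr = X_probe @ B_probe.T / N          # <x b^T>
    C_rr = B_probe @ B_probe.T / N          # <b b^T>
    return C_sr @ np.linalg.pinv(C_rr)
```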
4 Theoretical Results
The following hold for ACS under mild hypotheses (we postpone details for a future work).
Theorem 4.1 Suppose that an ensemble of signals is compressed with a random projection Φ. If
ACS converges on a sparsity-inducing dictionary θ and C_rr is invertible, then θ = Φ · RM.
Theorem 4.2 Suppose that an ensemble of signals has a sparse representation with dictionary Ψ.
If ACS converges on a sparsity-inducing dictionary, then the outputs of ACS are a sparse representation for the original signals in the dictionary of the reconstruction matrix RM given by (6). Moreover, there exists a diagonal matrix D and a partial permutation matrix P such that Ψ = RM · DP.
Figure 2: Subsets of the reconstruction matrices RM for the ACS networks trained on synthetic
sparse data generated using bases (a) standard 2D, (b) 2D DCT, (c) learned by sparse coding on
natural images. The components of RM in (a) and (b) are arranged by spatial location and spatial
frequency respectively to help with visual interpretation.
5 Experimental results
To demonstrate that the ACS algorithm solves the ACS problem in practice, we train ACS networks
on synthetic and natural image patches. We use 16 × 16 image patches which are compressed by
an i.i.d. Gaussian measurement matrix before ACS sees them. Unless otherwise stated we use a
compression factor of 2; that is, the 256 dimensional patches were captured by 128 measurements
sent to the ACS circuit (current experiments are successful with a compression factor of 10). The
feature sign algorithm developed in [5] is used for inference of b in (4). After the inference step,
θ is updated using gradient descent in (4). The matrix θ is initialized randomly and renormalized
to have unit length columns after each learning step. Learning is performed until the ACS circuit
converges on a sparsity basis for the compressed space.
To assess whether the sparse representations formed by the ACS circuit are representations of the
original data, we estimate a reconstruction matrix RM as in (6) by correlating a set of 10,000
uncompressed image patches with their encodings b in the ACS circuit. Using RM and the ACS
circuit, we reconstruct original data from compressed data. Reconstruction performance is evaluated
on a test set of 1000 image patches by computing the signal-to-noise ratio of the reconstructed
signals x̂: SNR = 10 log₁₀( ⟨‖x‖²₂⟩_X / ⟨‖x − x̂‖²₂⟩_X ). For comparison, we also performed CS using the
feature sign algorithm to solve (1) using a fixed sparsity basis Ψ and reconstruction given by x̂ = Ψb̂.
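As a sketch, this metric over a batch of test patches (stored as columns) is a one-liner:

```python
import numpy as np

def mean_snr_db(X, X_hat):
    """SNR = 10 log10( <||x||^2> / <||x - x_hat||^2> )."""
    num = np.mean(np.sum(X ** 2, axis=0))
    den = np.mean(np.sum((X - X_hat) ** 2, axis=0))
    return 10.0 * np.log10(num / den)
```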
Synthetic Data: To assess ACS performance on data of known sparsity we first generate synthetic
image patches with sparse underlying structure in known bases. We test with three different bases:
the standard 2D basis (i.e. single pixel images), the 2D DCT basis, and a Gabor-like basis learned
by sparse coding on natural images. We generate random sparse binary vectors with k = 8, multiply
these vectors by the chosen basis to get images, and then compress these images to half their original
lengths to get training data. For each type of synthetic data, a separate ACS network is trained with
λ = 0.1 and a reconstruction matrix RM is computed. The RM corresponding to each generating basis
type is shown in Figure 2(a)-(c). We can see that RM closely resembles a permutation of the generating
basis, as predicted by Theorem 4.2. The mean SNR of the reconstructed signals in each case is 34.05
dB, 47.05 dB, and 36.38 dB respectively. Further, most ACS encodings are exact in the sense that
they exactly recovered the components used to synthesize the original image. Specifically, for the
DCT basis 95.4% of ACS codes have the same eight active basis vectors as were used to generate
the original image patch. Thresholding to remove small coefficients (coring) makes it 100%.
To explore how ACS performs in cases where the signals cannot be modeled exactly with sparse
representations, we generate sparse synthetic data (k = 8) with the 2D DCT basis and add Gaussian
noise. Figure 3(a) compares reconstruction fidelity of ACS and CS for increasing levels of noise.
Figure 3: Mean SNR of reconstructions. (a) compares ACS performance to CS performance with
true generating basis (DCT) for synthetic images with increasing amounts of Gaussian noise. (b) and
(c) compare the performances of ACS, CS with a basis learned by sparse coding on natural images
and CS with the DCT basis. Performances plotted against the compression factor (b) and the value
of λ used for encoding. (d) shows ACS performance on natural images vs. the completeness factor.
Figure 4: (a) RM for an ACS network trained on natural images with compression factor of 2, (b)
ACS reconstruction of a 128 × 128 image using increasing compression factors. Clockwise from
the top left: the original image, ACS with compression factors of 2, 4, and 8.
For pure sparse data (noise σ² = 0) CS outperforms ACS significantly. Without noise, CS is limited
by machine precision and reaches a mean SNR which is off the chart at 308.22 dB whereas ACS
is limited by inaccuracies in the learning process as well as inaccuracies in computing RM . For a
large range of noise levels CS and ACS performance become nearly identical. For very high levels
of noise CS and ACS performances begin to diverge as the advantage of knowing the true sparsity
basis becomes apparent again.
Natural Images: Natural image patches have sparse underlying structure in the sense that they can
be well approximated by sparse linear combinations of fixed bases, but they cannot be exactly reconstructed at a level of sparsity required by the theorems of CS and ACS. Thus, CS and ACS cannot be
expected to produce exact reconstructions of natural image patches. To explore the performance of
ACS on natural images we train ACS models on compressed image patches from whitened natural
images. The RM matrix for an ACS network using the default compression factor of 2 is shown in
Figure 4(a).
Next we explore how the fidelity of ACS reconstructions varies with the compression factor. Figure
4(b) shows an entire image portion reconstructed patch-wise by ACS for increasing compression
factors. Figure 3(b) compares the SNR of these reconstructions to CS reconstructions. Since there
is no true sparsity basis for natural images, we perform CS either with a dictionary learned from
uncompressed natural images using sparse coding or with the 2D DCT. Both the ACS sparsity basis
and sparse coding basis used with CS are learned with λ fixed at 0.1 in eq. (3). Figure 3(b) demonstrates
that CS performs much better with the learned dictionary than with the standard 2D DCT. Further,
the plot shows that ACS produces slightly higher fidelity reconstructions than CS.
However, the comparison between CS and ACS might be confounded by the sensitivity of these
algorithms to the value of λ used during encoding.
In the context of CS, there is a sweet spot for the sparsity of representations. More sparse encodings
have a better chance of being accurately recovered from the measurements because they obey conditions of the CS theorems better. At the same time, these are less likely to be accurate encodings
of the original signal since they are limited to fewer of the basis vectors for their reconstructions.
As a result, reconstruction fidelity as a function of λ has a maximum at the sweet spot of sparsity
for CS (increasing the value of λ leads to sparser representations). Values of λ below this point
produce representations that are not sparse enough to be accurately recovered from the compressed
measurements, while values of λ above it produce representations that are too sparse to accurately
model the original signal even if they could be accurately recovered.
To explore how the performance of CS and ACS depends on the sparseness of their representations,
we vary the value of λ used while encoding. Figure 3(c) compares ACS, CS with a sparse coding
basis, and CS with the 2D DCT basis. Once again we see that ACS performs slightly better than
CS with a learned dictionary, and much better than CS with the DCT basis. However, the shape
of the curves with respect to the choice of λ while encoding suggests that our choice of value for
λ while learning (0.1 for both ACS and the sparse coding basis used with CS) may be suboptimal.
Additionally, the optimal value of λ for CS may differ from the optimal value of λ for ACS. For
these reasons, it is unclear if ACS exceeds the SNR performance of CS with dictionary learning
when in the optimal regime for both approaches. Most likely, as 3(b) suggests, their performances
are not significantly different. However, one reason ACS might perform better is that learning a
sparsity basis in compressed space tunes the sparsity basis with respect to the measurement matrix
whereas performing dictionary learning for CS estimates the sparsity basis independently of the
measurement matrix. Additionally, having its sparsity basis in the compressed space means that
ACS is more efficient in terms of runtime than dictionary learning for CS because the lengths of
basis vectors are reduced by the compression factor.
ACS in brain communication: When considering ACS as a model of communication in the brain,
one important question is whether it works when the representational dimensions vary from region
to region. Typically in CS, the number of basis functions is chosen to equal the dimension of the
original space. To demonstrate how ACS could model the communication between regions with
different representation dimensions, we train ACS networks whose encoding dimensions are larger
or smaller than the dimension of the original space (overcomplete or undercomplete). As shown in
figure 3(d), the reconstruction fidelity decreases in the undercomplete case because representations
in that space either have fewer total active coding vectors or are significantly less sparse. Interestingly, the reconstruction fidelity increases in the overcomplete case. We suspect that this gain from
overcompleteness also applies in standard CS with an overcomplete dictionary, but this has not been
tested so far.
Figure 5: A subset of RM from each stage of our multistage ACS model.
Another issue to consider for ACS as a model of communication in the brain is whether signal fidelity
is preserved through repeated communications. To investigate this question we simulated multiple
stages of communication using ACS. In our model the input of compressed natural image patches is
encoded as a sparse representation in the first region, transmitted as a compressed signal to a second
region where it is encoded sparsely, and compressively transmitted once again to a third region that
performs the final encoding. Obviously, this is a vacuous model of neural computation since there is
little use in simply retransmitting the same signal. A meaningful model of cortical processing would
involve additional local computations on the sparse representations before retransmission. However,
this basic model can help us explore the effects of repeated communication by ACS. Using samples
from the uncompressed space, we compute RM for each stage just as for a single stage model.
Figure 5 shows subsets of the components of RM for each stage. Notice that meaningful gabor-like
structure is preserved between stages.
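As a rough illustration of this multistage pipeline (a sketch only, not the authors' code: the lasso-based encoder, the per-stage measurement matrices R1, R2 and dictionaries B1, B2 are hypothetical stand-ins for the learned quantities), each region sparsely encodes the compressed signal it receives and retransmits a further compressed version:

import numpy as np
from sklearn.linear_model import Lasso

def sparse_encode(B, y, lam=0.1):
    # Lasso encoding of measurement y against dictionary B (sparsity set by lam).
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    model.fit(B, y)
    return model.coef_

rng = np.random.default_rng(0)
n, m = 256, 64                                   # original / compressed dimensions (illustrative)
x = rng.standard_normal(n)                       # stand-in for an image patch
R1 = rng.standard_normal((m, n)) / np.sqrt(n)    # stage-1 measurement matrix
B1 = rng.standard_normal((m, 2 * m))             # stage-1 dictionary (learned in ACS)
a1 = sparse_encode(B1, R1 @ x)                   # first region: sparse code of compressed input

R2 = rng.standard_normal((m, B1.shape[1])) / np.sqrt(B1.shape[1])
B2 = rng.standard_normal((m, 2 * m))             # stage-2 dictionary
a2 = sparse_encode(B2, R2 @ a1)                  # second region re-encodes the retransmission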
6 Discussion
In this paper, we propose ACS, a new algorithm for learning meaningful sparse representations
of compressively sampled signals without access to the full signals. Two crucial differences set
ACS apart from traditional CS. First, the ACS coding circuit is formed by unsupervised learning
on subsampled signals and does not require knowledge of the sparsity basis of the signals nor of
the measurement matrix used for subsampling. Second, the information in the fully trained ACS
coding circuit is insufficient to reconstruct the original signals. To assess the usefulness of the representations formed by ACS, we developed a second estimation procedure that probes the trained
ACS coding circuit with the full signals and correlates signal with encoding. Similarly to the electrophysiological approach of computing receptive fields, we computed a reconstruction matrix RM .
Theorem 4.2 proves that after convergence, ACS produces representations of the full data and that
the estimation procedure finds a reconstruction matrix which can reproduce the full data. Further,
our simulation experiments revealed that the RM matrix contained smooth receptive fields resembling oriented simple cells (Figures 2 and 4), suggesting that the ACS learning scheme can explain
the formation of receptive fields even when the input to the cell population is undersampled (and
thus conventional sparse coding would falter). In addition, the combination of ACS circuit and RM
matrix can be used in practice for data compression and be directly compared with traditional CS.
Interestingly, ACS is fully on par with CS in terms of reconstruction quality (Figure 3). At the same
time it is both flexible and stackable, and it works in overcomplete and undercomplete cases.
The recent work on BCS [4] addressed a similar problem where the sparsity basis of compressed
samples is unknown. A main difference between BCS and ACS is that BCS aims for full reconstruction of the original signals from compressed signals whereas ACS does not. As a consequence, BCS
is generally ill-posed [4], whereas ACS permits a solution, as we have shown. We have argued that
full data reconstruction is not a prerequisite for communication between brain regions. However,
note that ACS can be made a full reconstruction algorithm if there is limited access to uncompressed
signal. Thus, neither ACS nor practical applications of BCS are fully blind learning algorithms, as
both rely on further constraints [4] inferred from the original data. An alternative to ACS / BCS for
introducing learning in CS was to adapt the measurement matrix to data [3, 16].
The engineering implications of ACS merit further exploration. In particular, our compression results with overcomplete ACS indicate that the reconstruction quality was significantly higher than
with standard CS. Additionally, the unsupervised learning with ACS may have advantages in situations where access to uncompressed signals is limited or very expensive to acquire. With ACS it is
possible to do the heavy work of learning a good sparsity basis entirely in the compressed space and
only a small number of samples from the uncompressed space are required to reconstruct with RM .
Perhaps the most intriguing implications of our work concern neurobiology. Our results clearly
demonstrate that meaningful sparse representations can be learned on the far end of wiring bottlenecks, fully unsupervised, and without any knowledge of the subsampling scheme. In addition, ACS
with overcomplete or undercomplete codes suggests how sparse representations can be communicated between neural populations of different sizes. From our study, we predict that firing patterns
of neurons sending long-range axons might be less sparse than those involved in local connectivity,
a hypothesis that could be experimentally verified. It is intriguing to think that the elegance and
simplicity of compressive sampling and sparse coding could be exploited by the brain.
References
[1] A. Bell and T. Sejnowski. Learning the higher-order structure of a natural sound. Network: Computation in Neural Systems, 7(2):261–266, 1996.
[2] E.J. Candès. Compressive sampling. In Proceedings of the International Congress of Mathematicians, volume 3, pages 1433–1452. Citeseer, 2006.
[3] M. Elad. Optimized projections for compressed sensing. IEEE Transactions on Signal Processing, 55(12):5695–5702, 2007.
[4] S. Gleichman and Y.C. Eldar. Blind Compressed Sensing. preprint, 2010.
[5] H. Lee, A. Battle, R. Raina, and A.Y. Ng. Efficient sparse coding algorithms. Advances in Neural Information Processing Systems, 19:801, 2007.
[6] B.A. Olshausen and D.J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[7] M. Rehn and F.T. Sommer. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. Journal of Computational Neuroscience, 22(2):135–146, 2007.
[8] C.J. Rozell, D.H. Johnson, R.G. Baraniuk, and B.A. Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 20(10):2526–2563, 2008.
[9] D.L. Ruderman and W. Bialek. Statistics of natural images: Scaling in the woods. Physical Review Letters, 73(6):814–817, 1994.
[10] A. Schüz, D. Chaimow, D. Liewald, and M. Dortenman. Quantitative aspects of corticocortical connections: a tracer study in the mouse. Cerebral Cortex, 16(10):1474, 2006.
[11] E.C. Smith and M.S. Lewicki. Efficient auditory coding. Nature, 439(7079):978–982, 2006.
[12] M. Sur, P.E. Garraghty, and A.W. Roe. Experimentally induced visual projections into auditory thalamus and cortex. Science (Washington), 242(4884):1437–1437, 1988.
[13] F.E. Theunissen, S.V. David, N.C. Singh, A. Hsu, W.E. Vinje, and J.L. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network: Computation in Neural Systems, 12(3):289–316, 2001.
[14] D.C. Van Essen, C.H. Anderson, and D.J. Felleman. Information processing in the primate visual system: an integrated systems perspective. Science, 255(5043):419–423, 1992.
[15] M.J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, pages 2183–2202, 2009.
[16] Y. Weiss, H. Chang, and W. Freeman. Learning compressed sensing. In Snowbird Learning Workshop, Allerton, CA. Citeseer, 2007.
[17] R.J. Wyman and J.B. Thomas. What genes are necessary to make an identified synapse? In Cold Spring Harbor Symposia on Quantitative Biology, volume 48, page 641. Cold Spring Harbor Laboratory Press, 1983.
Word Features for Latent Dirichlet Allocation
James Petterson1, Alex Smola2, Tiberio Caetano1, Wray Buntine1, Shravan Narayanamurthy3
1 NICTA and ANU, Canberra, ACT, Australia
2 Yahoo! Research, Santa Clara, CA, USA
3 Yahoo! Research, Bangalore, India
Abstract
We extend Latent Dirichlet Allocation (LDA) by explicitly allowing for the encoding of side information in the distribution over words. This results in a variety
of new capabilities, such as improved estimates for infrequently occurring words,
as well as the ability to leverage thesauri and dictionaries in order to boost topic
cohesion within and across languages. We present experiments on multi-language
topic synchronisation where dictionary information is used to bias corresponding words towards similar topics. Results indicate that our model substantially
improves topic cohesion when compared to the standard LDA model.
1 Introduction
Latent Dirichlet Allocation [4] assigns topics to documents and generates topic distributions over
words given a collection of texts. In doing so, it ignores any side information about the similarity
between words. Nonetheless, it achieves a surprisingly high quality of coherence within topics.
The inability to deal with word features makes LDA fall short on several aspects. The most obvious
one is perhaps that the topics estimated for infrequently occurring words are usually unreliable.
Ideally, for example, we would like the topics associated with synonyms to have a prior tendency of
being similar, so that in case one of the words is rare but the other is common, the topic estimates
for the rare one can be improved. There are other examples. For instance, it is quite plausible that
'Germany' and 'German', or 'politics', 'politician', and 'political' should, by default, belong to
the same topic. Similarly, we would like to be able to leverage dictionaries in order to boost topic
cohesion across languages, a problem that has been researched but is far from being fully solved,
especially for non-aligned corpora [6]. For example, we know that 'democracy' and 'democracia'
are different words, but it is clear that not leveraging the fact they actually mean the same thing (and
therefore should have aligned topics) reduces the statistical strength of a model.
A possible solution, which we propose in this paper, is to treat word information as features rather
than as explicit constraints and to adjust a smoothing prior over topic distributions for words such
that correlation is emphasised. In the parlance of LDA we do not pick a globally constant β smoother
over the word multinomials but rather we adjust it according to word similarity. In this way we are
capable of learning the prior probability of how words are distributed over various topics based on
how similar they are, e.g. in the context of dictionaries, synonym collections, thesauri, edit distances,
or distributional word similarity features.
Unfortunately, in performing such model extension we lose full tractability of the setting by means
of a collapsed Gibbs sampler. Instead, we use a hybrid approach where we perform smooth optimisation over the word smoothing coefficients, while retaining a collapsed Gibbs sampler to assign
topics for a fixed choice of smoothing coefficients. The advantage of this setting is that it is entirely
modular and can be added to existing Gibbs samplers without modification.
We present experimental results on multi-language topic synchronisation which clearly evidence the
ability of the model to incorporate dictionary information successfully. Using several different measures of topic alignment, we consistently observe that the proposed model improves substantially on
standard LDA, which is unable to leverage this type of information.
[Figures 1 and 2: plate-diagram residue removed. Both figures show the graphical models in plate notation, with plates over topics k = 1 to K, documents m = 1 to M, words n = 1 to Nm and vocabulary entries v = 1 to V; only the captions are recoverable.]
Figure 1: LDA: The topic distribution for each word (ψv) has as smoother the Dirichlet distribution with a parameter β (independent of the word).
Figure 2: Our Extension: Assume we observe side information φv (i.e. features) for each word v. The word-specific smoothing parameters βkv are governed by φv and a common parameter choice y.
1.1 Related work
Loosely related works that use logistic models to induce structure in generative models are [17], which proposed a shared logistic normal distribution as a Bayesian prior over probabilistic grammar weights, and [10], which incorporated features into unsupervised models using locally normalized models. More related to our work is [5], which encodes correlations between synonyms, and [1], which encodes more general correlations. In fact, our proposed model can be seen as a generalisation of [1], where we can encode the strength of the links between each pair of words.
Previous work on multilingual topic models requires parallelism at either the sentence level ([20])
or document level ([9], [15]). More recent work [13] relaxes that, but still requires that a significant
fraction (at least 25%) of the documents are paired up.
Multilingual topic alignment without parallelism was recently proposed by [6]. Their model requires
a list of matched word pairs m (where each pair has one word in each language) and corresponding
matching priors π that encode the prior knowledge on how likely the match is to occur. The topics
are defined as distributions over word pairs, while the unmatched words come from a unigram
distribution specific to each language. Although their model could be in principle extended to more
than two languages their experimental section was focused on the bilingual case.
One of the key differences between [6] and our method is that we do not hardcode word information, but we use it only as a prior; this way our method becomes less sensitive to errors in the word
features. Furthermore, our model automatically extends to multiple languages without any modification, aligning topics even for language pairs for which we have no information, as we show in the
experimental section for the Portuguese/French pair. Finally, our model is conceptually simpler and
can be incorporated as a module in existing LDA implementations.
2 The Model
We begin by briefly reviewing the LDA model of [4] as captured in Figure 1. It assumes that
$$\theta_m \sim \mathrm{Dir}(\alpha) \quad (1a) \qquad z_{mn} \sim \mathrm{Mult}(\theta_m) \quad (1b) \qquad \psi_k \sim \mathrm{Dir}(\beta) \quad (1c) \qquad w_{mn} \sim \mathrm{Multi}(\psi_{z_{mn}}) \quad (1d)$$
Nonparametric extensions in terms of the number of topics can be obtained using Dirichlet process models [2] regarding the generation of topics. Our extension deals with the word smoother β. Instead of treating it as a constant for all words we attempt to infer its values for different words and topics respectively. That is, we assume that (1c) is replaced by
$$\psi_k \sim \mathrm{Dir}(\psi_k \mid \phi, y) \quad (2a) \qquad \beta \sim \mathrm{Logistic}(y; \phi). \quad (2b)$$
We refer to this setting as downstream conditioning, in analogy to the upstream conditioning of [14]
(which dealt with topical side information over documents). The corresponding graphical model
is given in Figure 2. The above dependency allows us to incorporate features of words as side
information. For instance, if two words (e.g. 'politics' and 'politician') are very similar then it is
plausible to assume that their topic distributions should also be quite similar. This can be achieved
by choosing similar β_{k,politics} and β_{k,politician}. For instance, both of those coefficients might have
great affinity to β_{k,scandal} and we might estimate y such that this is achieved.
2.1 Detailed Description
We now discuss the directed graphical model from Figure 2 in detail. Whenever needed we use the collapsed representation of the model [8], that is we integrate out the parameters θm and ψkv such that we only need to update z and β (or indirectly y). We define the standard quantities
$$n^{KV}_{kv} = \sum_{m,n} \{z_{mn} = k \text{ and } w_{mn} = v\} \qquad n^{KM}_{km} = \sum_{n} \{z_{mn} = k\}$$
as well as:
$$n^{K}_{k} = \sum_{m} n^{KM}_{km} \qquad n^{M}_{m} = \sum_{k} n^{KM}_{km} \qquad n^{V}_{v} = \sum_{k} n^{KV}_{kv}.$$
Topic distribution p(z_{mn}|θm): We assume that this is a multinomial distribution specific to document m, that is p(z_{mn}|θm) = θ_{m,z_{mn}}.
Conjugate distribution p(θm|α): This is a Dirichlet distribution with parameters α, where αk denotes the smoother for topic k.
Collapsed distribution p(zm|α): Integrating out θm and using conjugacy yields
$$p(z_m \mid \alpha) = \frac{\Gamma(\|\alpha\|_1)}{\Gamma(n^{M}_{m} + \|\alpha\|_1)} \prod_{k=1}^{K} \frac{\Gamma(n^{KM}_{km} + \alpha_k)}{\Gamma(\alpha_k)},$$
where Γ is the gamma function: $\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\,dt$.
Word distribution p(w_{mn}|z_{mn}, ψ): We assume that given a topic z_{mn} the word w_{mn} is drawn from a multinomial distribution ψ_{w_{mn},z_{mn}}. That is p(w_{mn}|z_{mn}, ψ) = ψ_{w_{mn},z_{mn}}. This is entirely standard as per the basic LDA model.
Conjugate distribution p(ψk|βk): As by default, we assume that ψk is distributed according to a Dirichlet distribution with parameters βk. The key difference is that here we do not assume that all coordinates of βk are identical. Nor do we assume that all βk are the same.
Collapsed distribution p(w|z, β): Integrating out ψk for all topics k yields the following
$$p(w \mid z, \beta) = \prod_{k=1}^{K} \frac{\Gamma(\|\beta_k\|_1)}{\Gamma(n^{K}_{k} + \|\beta_k\|_1)} \prod_{v=1}^{V} \frac{\Gamma(n^{KV}_{kv} + \beta_{kv})}{\Gamma(\beta_{kv})}$$
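As a concrete reading of these count statistics, they can be accumulated in one pass over the token stream (a minimal sketch; the layout of the corpus as parallel document/word/topic arrays is an assumption made for illustration):

import numpy as np

def count_tables(docs, words, topics, M, K, V):
    # docs[i], words[i], topics[i]: document index, vocabulary id and topic of token i.
    nKV = np.zeros((K, V), dtype=int)   # n^{KV}_{kv}
    nKM = np.zeros((K, M), dtype=int)   # n^{KM}_{km}
    for m, v, k in zip(docs, words, topics):
        nKV[k, v] += 1
        nKM[k, m] += 1
    nK = nKM.sum(axis=1)                # n^K_k = sum_m n^{KM}_{km}
    nM = nKM.sum(axis=0)                # n^M_m = sum_k n^{KM}_{km}
    nV = nKV.sum(axis=0)                # n^V_v = sum_k n^{KV}_{kv}
    return nKV, nKM, nK, nM, nV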
2.2 Priors
In order to better control the capacity of our model, we impose a prior on naturally related words, e.g. the ('Toyota', 'Kia') and the ('Bush', 'Cheney') tuples, rather than generally related words. For this purpose we design a similarity graph G(V, E) with words represented as vertices V and similarity edge weights λuv between vertices u, v ∈ V whenever u is related to v. In particular, the magnitude of λuv can denote the similarity between words u and v.
In the following we denote by ykv the topic dependent smoothing coefficients for a given word v and topic k. We impose the smoother
$$\log \beta_{kv} = y_{kv} + y_v \quad\text{and}\quad \log p(\beta) = -\frac{1}{2\sigma^2} \Big[ \sum_{v,v',k} \lambda_{v,v'} (y_{kv} - y_{kv'})^2 + \sum_{v} y_v^2 \Big],$$
where log p(β) is given up to an additive constant and yv allows for multiplicative topic-unspecific corrections. A similar model was used by [3] to capture temporal dependence between topic models computed at different time instances, e.g. when dealing with topic drift over several years in a scientific journal. There the vertices are words at a given time and the edges are between smoothers instantiated at subsequent years.
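A minimal sketch of evaluating this smoother and its log-prior (variable names are hypothetical; edges are stored as (v, v', λ) triples matching the similarity graph of the previous paragraph):

import numpy as np

def smoother_and_log_prior(y_kv, y_v, edges, sigma=1.0):
    # y_kv: (K, V) topic-specific offsets; y_v: (V,) topic-unspecific corrections.
    beta = np.exp(y_kv + y_v)                     # log beta_kv = y_kv + y_v
    pair_term = sum(lam * np.sum((y_kv[:, v] - y_kv[:, vp]) ** 2)
                    for v, vp, lam in edges)      # sums over v, v' and all topics k
    log_prior = -(pair_term + np.sum(y_v ** 2)) / (2.0 * sigma ** 2)
    return beta, log_prior                        # log_prior is up to an additive constant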
3 Inference
In analogy to the collapsed sampler of [8] we also represent the model in a collapsed fashion. That is, we integrate out the random variables θm (the document topic distributions) and ψkv (the topic word distributions), which leads to a joint likelihood in terms of the actual words wmn, the side information φ about words, the latent variable y, the smoothing hyperprior βkv, and finally, the topic assignments zmn.
3.1 Document Likelihood
The likelihood contains two terms: a word-dependent term which can be computed on the fly while resampling data1, and a model-dependent term involving the topic counts and the word-topic counts which can be computed by one pass through the aggregate tables respectively. Let us first write out the uncollapsed likelihood in terms of z, θ, ψ, α, β. We have
$$p(w, z, \theta, \psi \mid \alpha, \beta) = \prod_{m=1}^{M} \prod_{n=1}^{N_m} p(w_{mn} \mid z_{mn}, \psi)\, p(z_{mn} \mid \theta_m) \prod_{m=1}^{M} p(\theta_m \mid \alpha) \prod_{k=1}^{K} p(\psi_k \mid \beta)$$
Define $\bar{\alpha} := \|\alpha\|_1$ and $\bar{\beta}_k := \|\beta_k\|_1$. Integrating out θ and ψ yields
$$p(w, z \mid \alpha, \beta) = \prod_{m=1}^{M} \frac{\Gamma(\bar{\alpha})}{\Gamma(\bar{\alpha} + n^{M}_{m})} \prod_{k:\, n^{KM}_{km} \neq 0} \frac{\Gamma(\alpha_k + n^{KM}_{km})}{\Gamma(\alpha_k)} \prod_{k=1}^{K} \frac{\Gamma(\bar{\beta}_k)}{\Gamma(\bar{\beta}_k + n^{K}_{k})} \prod_{v:\, n^{KV}_{kv} \neq 0} \frac{\Gamma(\beta_{kv} + n^{KV}_{kv})}{\Gamma(\beta_{kv})}$$
The above product is obtained simply by canceling out terms in denominator and numerator where
the counts vanish. This is computationally significant, since it allows us to evaluate the normalization
for sparse count tables with cost linear in the number of nonzero coefficients rather than cost in the
dense count table.
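To make the computational point explicit, the collapsed log-likelihood can be evaluated touching only the nonzero counts (a sketch; dense arrays stand in for what would be sparse tables in a real implementation):

import numpy as np
from scipy.special import gammaln

def collapsed_loglik(nKM, nKV, nM, nK, alpha, beta):
    # alpha: (K,), beta: (K, V); returns log p(w, z | alpha, beta) from the formula above.
    ll = np.sum(gammaln(alpha.sum()) - gammaln(alpha.sum() + nM))
    k_i, m_i = np.nonzero(nKM)          # only nonzero document-topic counts contribute
    ll += np.sum(gammaln(alpha[k_i] + nKM[k_i, m_i]) - gammaln(alpha[k_i]))
    beta_bar = beta.sum(axis=1)
    ll += np.sum(gammaln(beta_bar) - gammaln(beta_bar + nK))
    k_i, v_i = np.nonzero(nKV)          # only nonzero topic-word counts contribute
    ll += np.sum(gammaln(beta[k_i, v_i] + nKV[k_i, v_i]) - gammaln(beta[k_i, v_i]))
    return ll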
3.2 Collapsed Sampler
In order to perform inference we need two components: a sampler which is able to draw from p(z_i = k | w, z_{-i}, α, β)2, and an estimation procedure for (β, y). The sampler is essentially the same as in standard LDA. For the count variables n^{KM}, n^{KV}, n^{K} and n^{M} we denote by the subscript "−" their values after the word w_{mn} and associated topic z_{mn} have been removed from the statistics. Standard calculations yield the following topic probability for resampling:
$$p(z_{mn} = k \mid \text{rest}) \propto \frac{\beta_{kv_{mn}} + n^{KV}_{kv_{mn}-}}{\bar{\beta}_k + n^{K}_{k-}} \cdot \left( n^{KM}_{km-} + \alpha_k \right) \qquad (6)$$
In the appendix we detail how to adapt the sampler of [19] to obtain faster sampling.
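A direct (unoptimized) implementation of one resampling step of Eq. (6) might look as follows (a sketch; the faster sampler of [19] referenced above buckets these terms instead of recomputing all K of them per token):

import numpy as np

def resample_token(m, v, k_old, nKM, nKV, nK, alpha, beta, beta_bar, rng):
    # Remove the current assignment, producing the "-" counts of Eq. (6).
    nKM[k_old, m] -= 1; nKV[k_old, v] -= 1; nK[k_old] -= 1
    p = (beta[:, v] + nKV[:, v]) / (beta_bar + nK) * (nKM[:, m] + alpha)
    k_new = rng.choice(len(p), p=p / p.sum())
    nKM[k_new, m] += 1; nKV[k_new, v] += 1; nK[k_new] += 1
    return k_new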
3.3 Topic Smoother for β
Optimizing over y is considerably hard since the log-likelihood does not decompose efficiently. This is due to the dependence of β̄k on all words in the dictionary. The data-dependent contribution to the negative log-likelihood is
$$L_{\beta} = \sum_{k=1}^{K} \Big[ \log \Gamma(\bar{\beta}_k + n^{K}_{k}) - \log \Gamma(\bar{\beta}_k) \Big] + \sum_{k=1}^{K} \sum_{v:\, n^{KV}_{kv} \neq 0} \Big[ \log \Gamma(\beta_{kv}) - \log \Gamma(\beta_{kv} + n^{KV}_{kv}) \Big]$$
with gradients given by the appropriate derivatives of the Γ function. We use the prior from section 2.2, which smooths between closely related words only. After choosing edges λuv according to these matching words, we obtain an optimisation problem directly in terms of the variables ykv and yv. Denote by N(v) the neighbours for word v in G(V, E), and Ψ(x) := ∂x log Γ(x) the Digamma function. We have
$$\partial_{y_{kv}} [L_{\beta} - \log p(\beta)] = \frac{1}{\sigma^2} \sum_{v' \in N(v)} \lambda_{v,v'} [y_{kv} - y_{kv'}] + \beta_{kv} \Big( \Psi(\bar{\beta}_k + n^{K}_{k}) - \Psi(\bar{\beta}_k) + \{ n^{KV}_{kv} > 0 \} \big[ \Psi(\beta_{kv}) - \Psi(\beta_{kv} + n^{KV}_{kv}) \big] \Big).$$
The gradient with respect to y_v is analogous.
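Evaluated term by term with the digamma function, this gradient is cheap to compute (a sketch mirroring the expression above; neighbours[v] holding (v', λ) pairs is an assumed adjacency layout):

from scipy.special import digamma

def grad_y_kv(k, v, y_kv, beta, beta_bar, nK, nKV, neighbours, sigma=1.0):
    # Gradient of L_beta - log p(beta) with respect to y_kv.
    g = sum(lam * (y_kv[k, v] - y_kv[k, vp]) for vp, lam in neighbours[v]) / sigma ** 2
    g += beta[k, v] * (digamma(beta_bar[k] + nK[k]) - digamma(beta_bar[k]))
    if nKV[k, v] > 0:
        g += beta[k, v] * (digamma(beta[k, v]) - digamma(beta[k, v] + nKV[k, v]))
    return g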
1 Note that this is not entirely correct: the model changes slightly during one resampling pass, hence the log-likelihood that we compute is effectively the averaged log-likelihood due to an ongoing sampler. For a correct computation we would need to perform one pass through the data without resampling. Since this is wasteful, we choose the approximation instead.
2 Here z_i denotes the topic of word i, and z_{-i} the topics of all words in the corpus except for i.
4 Experiments
To demonstrate the usefulness of our model we applied it to a multi-lingual document collection,
where we can show a substantial improvement over the standard LDA model on the coordination
between topics of different languages.
4.1 Dataset
Since our goal is to compare topic distributions on different languages we used a parallel corpus
[11] with the proceedings of the European Parliament in 11 languages. We focused on two language
pairs: English/French and English/Portuguese.
Note that a parallel corpus is not necessary for the application of the proposed model; it is being
used here only because it allows us to properly evaluate the effectiveness of our model.3
We treated the transcript of each speaker in each session as a document, since different speakers
usually talk about different topics. We randomly sampled 1000 documents from each language,
removed infrequent4 and frequent5 words and kept only the documents with at least 20 words. Finally, we removed all documents that lost their corresponding translations in this process. After this
preprocessing we were left with 2415 documents, 805 in each language, and a vocabulary size of
23883 words.
4.2 Baselines
We compared our model to standard LDA, learning α and β, both asymmetric6.
4.3 Prior
We imposed the graph based prior mentioned in Section 2.2. To build our similarity graph we used
the English-French and English-Portuguese dictionaries from http://wiki.webz.cz/dict/,
augmented with translations from Google Translate for the most frequent words in our dataset. As
described earlier, each word corresponds to a vertex, with an edge7 whenever two words match in
the dictionary.
In our model β = exp(ykv + yv), so we want to keep both ykv and yv reasonably low to avoid numerical problems, as a large value of either would lead to overflows. We ensure that by setting σ, the standard deviation of their prior, fixed to one in all experiments. We did the same for the standard LDA model, where to learn an asymmetric beta we simply removed ykv to get β = exp(yv).
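Building the similarity graph from such dictionaries is mechanical (a sketch; the dictionary as (word1, word2) pairs and the shared vocabulary index are assumed formats):

def build_edges(dictionary_pairs, vocab_index):
    # Returns (v, v', lambda) triples; weights are fixed to one, as per footnote 7.
    edges = []
    for w1, w2 in dictionary_pairs:
        if w1 in vocab_index and w2 in vocab_index:
            edges.append((vocab_index[w1], vocab_index[w2], 1.0))
    return edges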
4.4 Methodology
In our experiments we used all the English documents and a subset of the French and Portuguese
ones; this is what we have in a real application, when we try to learn a topic model from web pages:
the number of pages in English is far greater than in any other language.
We compared three approaches. First, we run the standard LDA model with all documents mixed
together; this is one of our baselines, which we call STD1.
Next we run our proposed model, but with a slight modification to the setup: in the first half of the
iterations of the Gibbs sampler we include only English documents; in the second half we add the
French and Portuguese ones to the mix.8
3 To emphasise this point, later in this section we show experiments with non-parallel corpora, in which case we have to rely on visual inspection to assess the outcomes.
4 Words that occurred less than 3 times in the corpus.
5 Words that occurred more than M/10 times in the corpus, where M is the total number of documents.
6 That is, we don't assume all coordinates of α and β are identical.
7 All edges have a fixed weight of one in this case.
8 We need to start with only one language so that an initial topic-word distribution is built; once that is done the priors are learned and can be used to guide the topic-word distributions in other languages.
Finally, as a control experiment we run the standard LDA model in this same setting: first English
documents, then all languages mixed. We call this STD2.
In all experiments we run the Gibbs sampler for a total of 3000 iterations, with the number of topics
fixed to 20, and keep the last sample. After a burn-in of 500 iterations, the optimisation over the word
smoothing coefficients is done every 100 iterations, using an off-the-shelf L-BFGS [12] optimizer.9
We repeat every experiment 5 times with different randomisations.
4.5 Evaluation
Evaluation of topic models is an open problem: recent work [7] suggests that popular measures
based on held-out likelihood, such as perplexity, do not capture whether topics are coherent or
not. Furthermore, we need a set of measures that can assess whether or not we improved over
the standard LDA model w.r.t. our goal (to synchronize topics across different languages), and
there's no reason to believe that likelihood measures would assess that: a model where topics are
synchronized across languages is not necessarily more likely than a model that is not synchronized.
Therefore, to evaluate our model we compare the topic distributions of each English document with its corresponding French pair (and analogously for the other combinations: English/Portuguese and French/Portuguese), with these metrics:
Mean ℓ2 distance: $\frac{1}{|L_1|} \sum_{d_1 \in L_1, d_2 = F(d_1)} \big[ \sum_{k=1}^{K} (\theta_k^{d_1} - \theta_k^{d_2})^2 \big]^{1/2}$, where L1 denotes the set of documents in the first language, F a mapping from a document in the first language to its corresponding translation in the second language and θ^d the topic distribution of document d.
Mean Hellinger distance: $\frac{1}{|L_1|} \sum_{d_1 \in L_1, d_2 = F(d_1)} \sum_{k=1}^{K} \big( \sqrt{\theta_k^{d_1}} - \sqrt{\theta_k^{d_2}} \big)^2$
Agreements on first topic: $\frac{1}{|L_1|} \sum_{d_1 \in L_1, d_2 = F(d_1)} I(\arg\max_k \theta_k^{d_1}, \arg\max_k \theta_k^{d_2})$, where I is the indicator function; that is, the proportion of document pairs where the most likely topic is the same for both languages.
Mean number of agreements in top 5 topics: $\frac{1}{|L_1|} \sum_{d_1 \in L_1, d_2 = F(d_1)} \mathrm{agreements}(d_1, d_2)$, where agreements(d1, d2) is the cardinality of the intersection of the 5 most likely topics of d1 and d2.
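Given aligned per-document topic matrices, all four metrics reduce to a few lines (a sketch; theta1 and theta2 are assumed to hold the topic distributions of each document and of its translation, row by row):

import numpy as np

def compare_topics(theta1, theta2, top=5):
    # theta1, theta2: (D, K) arrays whose rows sum to one.
    l2 = np.mean(np.sqrt(np.sum((theta1 - theta2) ** 2, axis=1)))
    hell = np.mean(np.sum((np.sqrt(theta1) - np.sqrt(theta2)) ** 2, axis=1))
    first = np.mean(theta1.argmax(axis=1) == theta2.argmax(axis=1))
    top1 = np.argsort(theta1, axis=1)[:, -top:]
    top2 = np.argsort(theta2, axis=1)[:, -top:]
    agree = np.mean([len(set(a) & set(b)) for a, b in zip(top1, top2)])
    return l2, hell, first, agree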
4.6 Results
In Figure 3 we compare our method (DC) to the standard LDA model (STD1 and STD2, see section
4.4), for the English-French pair10 . In all metrics our proposed model shows a substantial improvement over the standard LDA model.
In Figures 4 and 5 we do the same for the English-Portuguese and Portuguese-French pairs, respectively, with similar results. Note that we did not use a Portuguese-French dictionary in any
experiment.
In Figure 6 we plot the word smoothing prior for the English word democracy and its French and
Portuguese translations, démocratie and democracia, for both the standard LDA model (STD1) and
our model (DC), with 20% of the French and Portuguese documents used in training. In STD1
we don't have topic-specific priors (hence the horizontal line) and the word democracy has a much
higher prior, because it happens more often in the dataset (since we have all English documents and
only 20% of the French and Portuguese ones). In DC, however, the priors are topic-specific and
quite similar, as this is enforced by the similarity graph.
9 http://www.chokkan.org/software/liblbfgs
10 See the Appendix for run times.
To emphasize that we do not need a parallel corpus we ran a second experiment where we selected
the same number of documents of each language, but ensuring that for each document its corresponding translations are not in the dataset, and trained our model (DC) with 100 topics. This could
be done with any multilingual corpus, since no parallelization is required. In this case, however, we
cannot compute the distance metrics as before, since we have no information on the actual topic distributions of the documents. The best we can hope to do is to visually inspect the most likely words
for the learned topics. This is shown in Table 1, for some selected topics, where the synchronization
amongst the different languages is clear.
[Figure 3 plot residue removed. The figure has four panels (Mean ℓ2 distance, Mean Hellinger distance, % agreements on first topic, Mean no. agreements in top 5 topics), each plotting STD1, STD2 and DC against the % of French documents.]
Figure 3: Comparison of topic distributions in English and French documents. See text for details.
[Figure 4 plot residue removed. Same four panels as Figure 3, plotting STD1, STD2 and DC against the % of Portuguese documents.]
Figure 4: Comparison of topic distributions in English and Portuguese documents. See text.
[Figure 5 plot residue removed. Same four panels as Figure 3, plotting STD1, STD2 and DC against the % of Portuguese/French documents.]
Figure 5: Comparison of topic distributions in Portuguese and French documents. See text.
[Figure 6 plot residue removed. The figure shows log(β) versus topic index for the words democracy, democracia and démocratie, under STD1 (left) and DC (right).]
Figure 6: Word smoothing prior for two words in the standard LDA and in our model. The x-axis is
the index to the topic. See text for details.
5 Extensions: Other Features
Although we have implemented a specific type of feature encoding for the words, our model admits
a large range of applications through a suitable choice of features. In the following we discuss a
number of them in greater detail.
Table 1: Top 10 words for some of the learned topics (from top to bottom, respectively, topics 8, 17,
20, 32, 49). Words are colored according to their language (English, Portuguese or French) except
when ambiguous (e.g., information is a word in both French and English). See text for details.
amendments, alterações, amendment, amendements, alteração, use, substances, règlement, l'amendement, accept
élections, electoral, elections, députés, eleições, partis, proportional, eleitoral, transnational, scrutin
informação, information, regiões, société, l'information, acesso, aeroplanes, prix, régions, comunicação
stability, coordination, estabilidade, central, coordenação, plans, objectivo, stabilité, ue, list
monnaie, consumers, consumidores, consommateurs, l'euro, crois, s'agit, moeda, pouvoir, currency
5.1 Single Language
Distributional Similarity: The basic idea is that words are similar if they occur in a similar context
[16]. Hence, one could build a graph as outlined in Section 2.2 with edges only between words
which exceed a level of proximity.
Lexical Similarity: For interpolation between words one could use a distribution over substrings
of a word as the feature map. This is essentially what is proposed by [18]. Such lexical similarity
makes the sampler less sensitive to issues such as stemming: after all, two words which reduce to
the same stem will also have a high lexical similarity score, hence the estimated βkv will yield very
similar topic assignments.
Synonyms and Thesauri: Given a list of synonyms it is reasonable to assume that they belong
to related topics. This can be achieved by adding edges between a word and all of its synonyms.
Since in our framework we only use this information to shape a prior, errors in the synonym list and
multiple meanings of a word will not prove fatal.
5.2 Multiple Languages
Lexical Similarity: Similar considerations apply for inter-lingual topic models. It is reasonable to
assume that lexical similarity generally points to similarity in meaning. Using such features should
allow one to synchronise topics even in the absence of dictionaries. However, it is important that
similarities are not hardcoded but only imposed as a prior on the topic distribution (e.g., 'gift' has
different meanings in English and German).
6 Discussion
In this paper we described a simple yet general formalism for incorporating word features into LDA,
which among other things allows us to synchronise topics across different languages. We performed
a number of experiments in the multiple-language setting, in which the goal was to show that our
model is able to incorporate dictionary information in order to improve topic alignment across different languages. Our experimental results reveal substantial improvement over the LDA model in
the quality of topic alignment, as measured by several metrics, and in particular we obtain much
improved topic alignment even across languages for which a dictionary is not used (as described in
the Portuguese/French plots, see Figure 5). We also showed that the algorithm is quite effective even
in the absence of documents that are explicitly denoted as being aligned (see Table 1). This sets it
apart from [13], which requires that a significant fraction (at least 25%) of documents are paired up.
Also, the model is not limited to lexical features. Instead, we could for instance also exploit syntactical information such as parse trees. For instance, noun / verb disambiguation or named entity
recognition are all useful in determining the meaning of words and therefore it is quite likely that
they will also aid in obtaining an improved topical mixture model.
Acknowledgements
NICTA is funded by the Australian Government as represented by the Department of Broadband,
Communications and the Digital Economy and the Australian Research Council through the ICT
Centre of Excellence program.
References
[1] David Andrzejewski, Xiaojin Zhu, and Mark Craven. Incorporating domain knowledge into topic modeling via Dirichlet Forest priors. In ICML, pages 1–8. ACM Press, 2009.
[2] C. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Annals of Statistics, 2:1152–1174, 1974.
[3] David M. Blei and John D. Lafferty. Dynamic topic models. In W. W. Cohen and A. Moore, editors, ICML, volume 148, pages 113–120. ACM, 2006.
[4] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, January 2003.
[5] Jordan Boyd-Graber, David Blei, and Xiaojin Zhu. A Topic Model for Word Sense Disambiguation. In EMNLP-CoNLL, pages 1024–1033, 2007.
[6] Jordan Boyd-Graber and David M. Blei. Multilingual topic models for unaligned text. In Proceedings of the 25th Conference in Uncertainty in Artificial Intelligence (UAI), 2009.
[7] Jonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, and David Blei. Reading tea leaves: How humans interpret topic models. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, NIPS, pages 288–296. 2009.
[8] Thomas L. Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228–5235, 2004.
[9] Woosung Kim and Sanjeev Khudanpur. Lexical triggers and latent semantic analysis for crosslingual language model adaptation. ACM Transactions on Asian Language Information Processing, 3, 2004.
[10] T.B. Kirkpatrick, A.B. Côté, J. DeNero, and Dan Klein. Painless Unsupervised Learning with Features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2010.
[11] Philipp Koehn. Europarl: A parallel corpus for statistical machine translation. In Machine Translation Summit X, pages 79–86, 2005.
[12] Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(3):503–528, 1989.
[13] David Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. Polylingual topic models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 880–889, Singapore, August 2009. ACL.
[14] David M. Mimno and Andrew McCallum. Topic models conditioned on arbitrary features with dirichlet-multinomial regression. In D. A. McAllester and P. Myllymäki, editors, UAI, Proceedings of the 24th Conference in Uncertainty in Artificial Intelligence, pages 411–418. AUAI Press, 2008.
[15] Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen. Mining multilingual topics from Wikipedia. In 18th International World Wide Web Conference, pages 1155–1155, April 2009.
[16] Patrick Pantel and Dekang Lin. Discovering word senses from text. In David Hand, Daniel Keim, and Raymond Ng, editors, Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 613–619, New York, July 2002. ACM Press.
[17] Noah A Smith and Shay B Cohen. The Shared Logistic Normal Distribution for Grammar Induction. In NIPS Workshop on Speech and Language: Unsupervised Latent-Variable Models, pages 1–4, 2008.
[18] S. V. N. Vishwanathan and A. J. Smola. Fast kernels for string and tree matching. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 569–576. MIT Press, Cambridge, MA, 2003.
[19] Limin Yao, David Mimno, and Andrew McCallum. Efficient methods for topic model inference on streaming document collections. In KDD'09, 2009.
[20] Bing Zhao and Eric P. Xing. BiTAM: Bilingual Topic AdMixture Models for Word Alignment. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL'06), 2006.
Learning Multiple Tasks with a Sparse
Matrix-Normal Penalty
Yi Zhang
Machine Learning Department
Carnegie Mellon University
[email protected]
Jeff Schneider
The Robotics Institute
Carnegie Mellon University
[email protected]
Abstract
In this paper, we propose a matrix-variate normal penalty with sparse inverse covariances to couple multiple tasks. Learning multiple (parametric) models can be
viewed as estimating a matrix of parameters, where rows and columns of the matrix correspond to tasks and features, respectively. Following the matrix-variate
normal density, we design a penalty that decomposes the full covariance of matrix
elements into the Kronecker product of row covariance and column covariance,
which characterizes both task relatedness and feature representation. Several recently proposed methods are variants of the special cases of this formulation. To
address the overfitting issue and select meaningful task and feature structures,
we include sparse covariance selection into our matrix-normal regularization via
ℓ1 penalties on task and feature inverse covariances. We empirically study the
proposed method and compare with related models in two real-world problems:
detecting landmines in multiple fields and recognizing faces between different
subjects. Experimental results show that the proposed framework provides an effective and flexible way to model various different structures of multiple tasks.
1 Introduction
Learning multiple tasks has been studied for more than a decade [6, 24, 11]. Research in the following two directions has drawn considerable interest: learning a common feature representation
shared by tasks [1, 12, 30, 2, 3, 9, 23], and directly inferring the relatedness of tasks [4, 26, 21, 29].
Both have a natural interpretation if we view learning multiple tasks as estimating a matrix of model
parameters, where the rows and columns correspond to tasks and features. From this perspective,
learning the feature structure corresponds to discovering the structure of the columns in the parameter matrix, and modeling the task relatedness aims to find and utilize the relations among rows.
Regularization methods have shown promising results in finding either feature or task structure [1, 2, 12, 21]. In this paper we propose a new regularization approach and show how several
previous approaches are variants of special cases of it. The key contribution is a matrix-normal
penalty with sparse inverse covariances, which provides a framework for characterizing and coupling the model parameters of related tasks. Following the matrix normal density, we design a
penalty that decomposes the full covariance of matrix elements into the Kronecker product of row
and column covariances, which correspond to task and feature structures in multi-task learning. To
address overfitting and select task and feature structures, we incorporate sparse covariance selection
techniques into our matrix-normal regularization framework via ?1 penalties on task and feature inverse covariances. We compare the proposed method to related models on two real-world data sets:
detecting landmines in multiple fields and recognizing faces between different subjects.
2 Related Work
Multi-task learning has been an active research area for more than a decade [6, 24, 11]. For joint
learning of multiple tasks, connections need to be established to couple related tasks. One direction
is to find a common feature structure shared by tasks. Along this direction, researchers proposed to
infer task structure via principal components [1, 12], independent components [30] and covariance
[2, 3] in the parameter space, to select a common subset of features [9, 23], as well as to use shared
hidden nodes in neural networks [6, 11]. Specifically, learning a shared feature covariance for model
parameters [2] is a special case of our proposed framework. On the other hand, assuming models
of all tasks are equally similar is risky. Researchers recently began exploring methods to infer the
relatedness of tasks. These efforts include using mixtures of Gaussians [4] or Dirichlet processes
[26] to model task groups, encouraging clustering of tasks via a convex regularization penalty [21],
identifying "outlier" tasks by robust t-processes [29], and inferring task similarity from task-specific
features [8, 27, 28]. The present paper uses the matrix normal density and ℓ1-regularized sparse
covariance selection to specify a structured penalty, which provides a systematic way to characterize
and select both task and feature structures in multiple parametric models.
Matrix normal distributions have been studied in probability and statistics for several decades [13,
16, 18] and applied to predictive modeling in the Bayesian literature. For example, the standard
matrix normal can serve as a prior for Bayesian variable selection in multivariate regression [9],
where MCMC is used for sampling from the resulting posterior. Recently, matrix normal distributions have also been used in nonparametric Bayesian approaches, especially in learning Gaussian
Processes (GPs) for multi-output prediction [7] and collaborative filtering [27, 28]. In this case, the
covariance function of the GP prior is decomposed as the Kronecker product of a covariance over
functions and a covariance over examples. We note that the proposed matrix-normal penalty with
sparse inverse covariances in this paper can also be viewed as a new matrix-variate prior, upon which
Bayesian inference can be performed. We will pursue this direction in our future work.
3 Matrix-Variate Normal Distributions
3.1 Definition
The matrix-variate normal distribution is one of the most widely studied matrix-variate distributions
[18, 13, 16]. Consider an m × p matrix W. Since we can vectorize W to be an mp × 1 vector,
the normal distribution on a matrix W can be considered as a multivariate normal distribution on a
vector of mp dimensions. However, such an ordinary multivariate distribution ignores the special
structure of W as an m × p matrix, and as a result, the covariance characterizing the elements of
W is of size mp × mp. This size is usually prohibitive for modeling and estimation. To utilize the
structure of W, matrix normal distributions assume that the mp × mp covariance can be decomposed
as the Kronecker product Ω ⊗ Σ, and elements of W follow:

    \mathrm{Vec}(W) \sim N(\mathrm{Vec}(M), \; \Omega \otimes \Sigma)    (1)
where Ω is an m × m positive definite matrix indicating the covariance between rows of W, Σ is
a p × p positive definite matrix indicating the covariance between columns of W, Ω ⊗ Σ is the
Kronecker product of Ω and Σ, M is an m × p matrix containing the expectation of each element of
W, and Vec is the vectorization operation which maps an m × p matrix into an mp × 1 vector. Due to
the decomposition of covariance as the Kronecker product, the matrix-variate normal distribution of
an m × p matrix W, parameterized by the mean M, row covariance Ω and column covariance Σ,
has a compact log-density [18]:

    \log P(W) = -\frac{mp}{2}\log(2\pi) - \frac{p}{2}\log|\Omega| - \frac{m}{2}\log|\Sigma| - \frac{1}{2}\,\mathrm{tr}\{\Omega^{-1}(W - M)\Sigma^{-1}(W - M)^T\}    (2)

where |·| is the determinant of a square matrix, and tr{·} is the trace of a square matrix.
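The Kronecker structure makes eq. (2) easy to check numerically. The following sketch (ours, assuming NumPy and SciPy are available; all variable names and the random test instance are illustrative) evaluates eq. (2) directly and compares it against SciPy's built-in matrix-normal density and against the multivariate normal of eq. (1):

```python
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(0)
m, p = 3, 4
M = rng.standard_normal((m, p))
A = rng.standard_normal((m, m)); Omega = A @ A.T + m * np.eye(m)  # row covariance
B = rng.standard_normal((p, p)); Sigma = B @ B.T + p * np.eye(p)  # column covariance
W = rng.standard_normal((m, p))

# Direct evaluation of the log-density in eq. (2).
R = W - M
logp = (-0.5 * m * p * np.log(2 * np.pi)
        - 0.5 * p * np.linalg.slogdet(Omega)[1]
        - 0.5 * m * np.linalg.slogdet(Sigma)[1]
        - 0.5 * np.trace(np.linalg.solve(Omega, R) @ np.linalg.solve(Sigma, R.T)))

assert np.isclose(logp, matrix_normal(mean=M, rowcov=Omega, colcov=Sigma).logpdf(W))
# Eq. (1): the row-major vectorization of W is multivariate normal with
# covariance kron(Omega, Sigma).
assert np.isclose(logp, multivariate_normal(M.ravel(), np.kron(Omega, Sigma)).logpdf(W.ravel()))
```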
3.2 Maximum likelihood estimation (MLE)
Consider a set of n samples \{W_i\}_{i=1}^{n} where each W_i is an m × p matrix generated by a matrix-variate
normal distribution as in eq. (2). The maximum likelihood estimate (MLE) of the mean M is [16]:

    \hat{M} = \frac{1}{n} \sum_{i=1}^{n} W_i    (3)
The MLE estimators of Ω and Σ are solutions to the following system:

    \hat{\Omega} = \frac{1}{np} \sum_{i=1}^{n} (W_i - \hat{M}) \, \hat{\Sigma}^{-1} (W_i - \hat{M})^T
    \hat{\Sigma} = \frac{1}{nm} \sum_{i=1}^{n} (W_i - \hat{M})^T \hat{\Omega}^{-1} (W_i - \hat{M})    (4)

It is efficient to iteratively solve (4) until convergence, known as the "flip-flop" algorithm [16].
Also, \hat{\Omega} and \hat{\Sigma} are not identifiable and solutions for maximizing the log density in eq. (2) are not
unique. If (Ω*, Σ*) is an MLE estimate for the row and column covariances, then for any α > 0,
(αΩ*, (1/α)Σ*) will lead to the same log density and thus is also an MLE estimate. This can be seen
from the definition in eq. (1), where only the Kronecker product Ω ⊗ Σ is identifiable.
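For concreteness, a minimal sketch of the flip-flop iteration (our own; it assumes the n samples are stacked in a NumPy array of shape (n, m, p), and the iteration count and tolerance are arbitrary):

```python
import numpy as np

def flip_flop_mle(Ws, n_iter=100, tol=1e-8):
    """MLE of a matrix-normal via eq. (3)-(4); Ws has shape (n, m, p)."""
    n, m, p = Ws.shape
    M = Ws.mean(axis=0)                                        # eq. (3)
    R = Ws - M
    Omega, Sigma = np.eye(m), np.eye(p)
    for _ in range(n_iter):
        Si = np.linalg.inv(Sigma)
        Omega_new = sum(Ri @ Si @ Ri.T for Ri in R) / (n * p)  # first line of (4)
        Oi = np.linalg.inv(Omega_new)
        Sigma_new = sum(Ri.T @ Oi @ Ri for Ri in R) / (n * m)  # second line of (4)
        done = (np.abs(Omega_new - Omega).max() < tol and
                np.abs(Sigma_new - Sigma).max() < tol)
        Omega, Sigma = Omega_new, Sigma_new
        if done:
            break
    return M, Omega, Sigma   # recall: only the product Omega (x) Sigma is identifiable
```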
4 Learning Multiple Tasks with a Sparse Matrix-Normal Penalty
Regularization is a principled way to control model complexity [20]. Classical regularization penalties (for single-task learning) can be interpreted as assuming a multivariate prior distribution on the
parameter vector and performing maximum-a-posteriori estimation, e.g., the ℓ2 penalty and ℓ1 penalty
correspond to multivariate Gaussian and Laplacian priors, respectively. For multi-task learning, it is
natural to use matrix-variate priors to design regularization penalties.
In this section, we propose a matrix-normal penalty with sparse inverse covariances for learning
multiple related tasks. In Section 4.1 we start with learning multiple tasks with a matrix-normal
penalty. In Section 4.2 we study how to incorporate sparse covariance selection into our framework
by further imposing ℓ1 penalties on task and feature inverse covariances. In Section 4.3 we outline
the algorithm, and in Section 4.4 we discuss other useful constraints in our framework.
4.1 Learning with a Matrix Normal Penalty
Consider a multi-task learning problem with m tasks in a p-dimensional feature space. The training
sets are \{D_t\}_{t=1}^{m}, where each set D_t contains n_t examples \{(x_i^{(t)}, y_i^{(t)})\}_{i=1}^{n_t}. We want to learn
m models for the m tasks but appropriately share knowledge among tasks. Model parameters are
represented by an m ? p matrix W, where parameters for a task correspond to a row.
The last term in the matrix-variate normal density (2) provides a structure to couple the parameters
of multiple tasks as a matrix W: 1) we set M = 0, indicating a preference for simple models; 2)
the m × m row covariance Ω describes the similarity among tasks; 3) the p × p column covariance
matrix Σ represents a shared feature structure. This yields the following total loss L to optimize:

    L = \sum_{t=1}^{m} \sum_{i=1}^{n_t} L(y_i^{(t)}, x_i^{(t)}, W(t,:)) + \lambda \, \mathrm{tr}\{\Omega^{-1} W \Sigma^{-1} W^T\}    (5)

where λ controls the strength of the regularization, (y_i^{(t)}, x_i^{(t)}) is the ith example in the training
set of the tth task, W(t,:) is the parameter vector of the tth task, and L(·) is a convex empirical
loss function depending on the specific model we use, e.g., squared loss for linear regression, log-likelihood loss for logistic regression, hinge loss for SVMs, and so forth. When Ω and Σ are known
and positive definite, eq. (5) is convex w.r.t. W and thus W can be optimized efficiently [22].
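As an illustration of this step, here is a sketch (ours) of objective (5) and its gradient with logistic loss, as used later in Section 5; any generic first-order or conjugate-gradient solver can consume it. The data layout, a list Xs of per-task design matrices and a list ys of 0/1 label vectors, is an assumption of ours:

```python
import numpy as np

def objective_and_grad(W, Xs, ys, Omega_inv, Sigma_inv, lam):
    """Eq. (5) with logistic loss; W is (m, p), row t holds task t's parameters."""
    loss, grad = 0.0, np.zeros_like(W)
    for t, (X, y) in enumerate(zip(Xs, ys)):
        z = X @ W[t]
        loss += np.sum(np.logaddexp(0.0, z) - y * z)     # negative log-likelihood
        grad[t] = X.T @ (1.0 / (1.0 + np.exp(-z)) - y)
    P = Omega_inv @ W @ Sigma_inv
    loss += lam * np.trace(W.T @ P)   # lam * tr{Omega^-1 W Sigma^-1 W^T}
    grad += 2.0 * lam * P             # gradient of the trace penalty w.r.t. W
    return loss, grad
```

Since Ω and Σ enter only through the penalty, the same routine serves whether the inverse covariances are identities or inferred structures.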
Now we discuss a few special cases of (5) and how previous work relates to them. When we fix
Ω = I_m and Σ = I_p, the penalty term can be decomposed into standard ℓ2-norm penalties on the
m rows of W. In this case, the m tasks in (5) can be learned almost independently using single-task
ℓ2 regularization (but tasks are still tied by sharing the parameter λ).
When we fix Ω = I_m, tasks are linked only by a shared feature covariance Σ. This corresponds
to a multi-task feature learning framework [2, 3] which optimizes eq. (5) w.r.t. W and Σ, with an
additional constraint tr{Σ} ≤ 1 on the trace of Σ to avoid setting Σ to infinity.
When we fix Σ = I_p, tasks are coupled only by a task similarity matrix Ω. This is used in a
recent clustered multi-task learning formulation [21], which optimizes eq. (5) w.r.t. W and Ω, with
additional constraints on the singular values of Ω that are motivated and derived from task clustering.
A more recent multi-label classification model [19] essentially optimizes W in eq. (5) with a label
correlation Ω given as prior knowledge and empirical loss L as the max-margin hinge loss.
We usually do not know task and feature structures in advance. Therefore, we would like to infer Ω
and Σ in eq. (5). Note that if we jointly optimize W, Ω and Σ in eq. (5), we will always set Ω and
Σ to be infinity matrices. We can impose constraints on Ω and Σ to avoid this, but a more natural
way is to further expand eq. (5) to include all relevant terms w.r.t. Ω and Σ from the matrix normal
log-density (2). As a result, the total loss L is:

    L = \sum_{t=1}^{m} \sum_{i=1}^{n_t} L(y_i^{(t)}, x_i^{(t)}, W(t,:)) + \lambda \left[ p \log|\Omega| + m \log|\Sigma| + \mathrm{tr}\{\Omega^{-1} W \Sigma^{-1} W^T\} \right]    (6)

Based on this formula, we can infer the task structure Ω and feature structure Σ given the model
parameters W, as the following problem:

    \min_{\Omega,\Sigma} \; p \log|\Omega| + m \log|\Sigma| + \mathrm{tr}\{\Omega^{-1} W \Sigma^{-1} W^T\}    (7)
This problem is equivalent to maximizing the log-likelihood of a matrix normal distribution as in
eq. (2), given W as observations and expectation M fixed at 0. Following Section 3.2, the MLE of
Ω and Σ can be obtained by the "flip-flop" algorithm:

    \hat{\Omega} = \frac{1}{p} W \hat{\Sigma}^{-1} W^T + \epsilon I_m
    \hat{\Sigma} = \frac{1}{m} W^T \hat{\Omega}^{-1} W + \epsilon I_p    (8)

where ε is a small positive constant to improve numerical stability. As discussed in Section 3.2,
only Ω ⊗ Σ is uniquely defined, and \hat{\Omega} and \hat{\Sigma} are only identifiable up to a multiplicative constant. This
will not affect the optimization of W using eq. (5), since only Ω ⊗ Σ matters for this purpose.
4.2 Sparse Covariance Selection in the Matrix-Normal Penalty
Consider the sparsity of Ω⁻¹ and Σ⁻¹. When Ω has a sparse inverse, task pairs corresponding to
zero entries in Ω⁻¹ will not be explicitly coupled in the penalty of (6). Similarly, a zero entry in
Σ⁻¹ indicates no direct interaction between the two corresponding features in the penalty. Also,
note that a clustering of tasks can be expressed by block-wise sparsity of Ω⁻¹.
Covariance selection aims to select nonzero entries in the Gaussian inverse covariance and discover
conditional independence between variables (indicated by zero entries in the inverse covariance) [14,
5, 17, 15]. The matrix-normal density in eq. (6) enables us to perform sparse covariance selection to
regularize and select task and feature structures.
Formally, we rewrite (6) to include two additional ℓ1 penalty terms on the inverse covariances:

    L = \sum_{t=1}^{m} \sum_{i=1}^{n_t} L(y_i^{(t)}, x_i^{(t)}, W(t,:)) + \lambda \left[ p \log|\Omega| + m \log|\Sigma| + \mathrm{tr}\{\Omega^{-1} W \Sigma^{-1} W^T\} \right] + \lambda_\Omega \|\Omega^{-1}\|_{\ell_1} + \lambda_\Sigma \|\Sigma^{-1}\|_{\ell_1}    (9)

where ‖·‖_{ℓ1} is the ℓ1-norm of a matrix, and λ_Ω and λ_Σ control the strength of the ℓ1 penalties and
therefore the sparsity of task and feature structures.
Based on the new regularization formula (9), estimating W given Ω and Σ as in (5) is not affected,
while inferring Ω and Σ given W, previously shown as (7), becomes a new problem:

    \min_{\Omega,\Sigma} \; p \log|\Omega| + m \log|\Sigma| + \mathrm{tr}\{\Omega^{-1} W \Sigma^{-1} W^T\} + \frac{\lambda_\Omega}{\lambda} \|\Omega^{-1}\|_{\ell_1} + \frac{\lambda_\Sigma}{\lambda} \|\Sigma^{-1}\|_{\ell_1}    (10)
As in (8), we can iteratively optimize Ω and Σ until convergence, as follows:

    \hat{\Omega} = \mathrm{argmin}_{\Omega} \; p \log|\Omega| + \mathrm{tr}\{\Omega^{-1} (W \hat{\Sigma}^{-1} W^T)\} + \frac{\lambda_\Omega}{\lambda} \|\Omega^{-1}\|_{\ell_1}
    \hat{\Sigma} = \mathrm{argmin}_{\Sigma} \; m \log|\Sigma| + \mathrm{tr}\{\Sigma^{-1} (W^T \hat{\Omega}^{-1} W)\} + \frac{\lambda_\Sigma}{\lambda} \|\Sigma^{-1}\|_{\ell_1}    (11)

Note that both equations in (11) are ℓ1 regularized covariance selection problems, for which efficient
optimization has been intensively studied [5, 17, 15]. For example, we can use graphical lasso [17]
as a basic solver and consider (11) as an ℓ1 regularized "flip-flop" algorithm:

    \hat{\Omega} = \mathrm{glasso}\Big(\frac{1}{p} W \hat{\Sigma}^{-1} W^T, \; \frac{\lambda_\Omega}{\lambda p}\Big)
    \hat{\Sigma} = \mathrm{glasso}\Big(\frac{1}{m} W^T \hat{\Omega}^{-1} W, \; \frac{\lambda_\Sigma}{\lambda m}\Big)

where glasso(S, ρ) denotes the graphical lasso estimate given empirical covariance S and ℓ1 penalty ρ.
Finally, an annoying part of eq. (9) is the presence of two additional regularization parameters λ_Ω
and λ_Σ. Due to the property of matrix normal distributions that only Ω ⊗ Σ is identifiable, we can
safely reduce the complexity of choosing regularization parameters by considering the restriction:

    \lambda_\Omega = \lambda_\Sigma    (12)

The following lemma proves that restricting λ_Ω and λ_Σ to be equal in eq. (9) will not reduce the
space of optimal models W we can obtain. As a result, we eliminate one regularization parameter.
Lemma 1. Suppose W* belongs to a minimizer (W*, Ω*, Σ*) of eq. (9) with some arbitrary
choice of λ, λ_Ω and λ_Σ > 0. Then, W* must also belong to a minimizer of eq. (9) with a certain
choice of λ′, λ′_Ω and λ′_Σ such that λ′_Ω = λ′_Σ. Proof of Lemma 1 is provided in Appendix A.
4.3 The Algorithm
Based on the regularization formula (9), we study the following algorithm for learning multiple tasks:
1) Estimate W by solving (5), using Ω = I_m and Σ = I_p;
2) Infer Ω and Σ in (9) (by solving (11) until convergence), using the estimated W from step 1);
3) Estimate W by solving (5), using the inferred Ω and Σ from step 2).
One can safely iterate over steps 2) and 3), and convergence to a local minimum of eq. (9) is guaranteed. However, we observed that a single pass yields good results¹. Steps 1) and 3) are linear in the
number of data points and step 2) is independent of it, so the method scales well with the number
of samples. Step 2) needs to solve ℓ1 regularized covariance selection problems as in (11). We use the
state-of-the-art technique [17], but more efficient optimization for large covariances is still desirable.
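A sketch of step 2) using scikit-learn's graphical lasso as the basic solver (our own; the per-call penalty normalization below is derived by rewriting (11) in the standard graphical-lasso form and should be treated as an assumption, as should the ε jitter borrowed from eq. (8)):

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def infer_structures(W, lam, lam_cov, n_iter=20, eps=1e-3):
    """l1-regularized flip-flop of eq. (11); lam_cov plays the role of
    lambda_Omega = lambda_Sigma under restriction (12)."""
    m, p = W.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        S_om = W @ np.linalg.inv(Sigma) @ W.T / p + eps * np.eye(m)
        # graphical_lasso returns (covariance, precision); note it penalizes
        # only the off-diagonal entries of the precision matrix.
        Omega, _ = graphical_lasso(S_om, alpha=lam_cov / (lam * p))
        S_si = W.T @ np.linalg.inv(Omega) @ W / m + eps * np.eye(p)
        Sigma, _ = graphical_lasso(S_si, alpha=lam_cov / (lam * m))
    return Omega, Sigma
```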
4.4 Additional Constraints
We can have additional structural assumptions in the matrix-normal penalty. For example, consider:

    \Omega_{ii} = 1, \quad i = 1, 2, \ldots, m    (13)
    \Sigma_{jj} = 1, \quad j = 1, 2, \ldots, p    (14)

In this case, we ignore variances and restrict our attention to correlation structures. For example,
off-diagonal entries of the task covariance Ω characterize the task similarity; diagonal entries indicate
different amounts of regularization on tasks, which may be fixed as a constant if we prefer tasks to be
equally regularized. Similar arguments apply to the feature covariance Σ. We include these restrictions
by converting the inferred covariance(s) into correlation(s) in step 2) of the algorithm in Section 4.3. In
other words, the restrictions are enforced by a projection step.
If one wants to iterate over steps 2) and 3) of the algorithm in Section 4.3 until convergence, we
may consider the constraints

    \Omega_{ii} = c_1, \quad i = 1, 2, \ldots, m    (15)
    \Sigma_{jj} = c_2, \quad j = 1, 2, \ldots, p    (16)
with unknown quantities c₁ and c₂, and consider eq. (9) in step 2) as a constrained optimization
problem w.r.t. W, Ω, Σ, c₁ and c₂, instead of using a projection step. As a result, the "flip-flop"
algorithm in (11) needs to solve ℓ1 penalized covariance selection with equality constraints (15)
or (16), where the dual block coordinate descent [5] and graphical lasso [17] are no longer directly
applicable. In this case, one can solve the two steps of (11) as determinant maximization problems
with linear constraints [25], but this is inefficient. We will study this direction (efficient constrained
sparse covariance selection) in future work.
5 Empirical Studies
In this section, we present our empirical studies on a landmine detection problem and a face recognition problem, where multiple tasks correspond to detecting landmines at different landmine fields
and classifying faces between different subjects, respectively.
¹ Further iterations over steps 2) and 3) will not dramatically change the model estimate. Also, early stopping
as regularization might lead to better generalizability.
5.1 Data Sets and Experimental Settings
The landmine detection data set from [26] contains examples collected from different landmine
fields. Each example in the data set is represented by a 9-dimensional feature vector extracted from
radar imaging, which includes moment-based features, correlation-based features, an energy ratio
feature and a spatial variance feature. As a binary classification problem, the goal is to predict
landmines (positive class) or clutter (negative class). Following [26], we jointly learn 19 tasks from
landmine fields 1 ? 10 and 19 ? 24 in the data set. As a result, the model parameters W are a 19 ? 10
matrix, corresponding to 19 tasks and 10 coefficients (including the intercept) for each task.
The distribution of examples is imbalanced in each task, with a few dozen positive examples and
several hundred negative examples. Therefore, we use the average AUC (Area Under the ROC
Curve) over 19 tasks as the performance measure. We vary the size of the training set for each task
as 30, 40, 80 and 160. Note that we intentionally keep the training sets small because the need for
cross-task learning diminishes as the training set becomes large relative to the number of parameters
being learned. For each training set size, we randomly select training examples for each task and
the rest is used as the testing set. This is repeated 30 times. Task-average AUC scores are collected
over 30 runs, and mean and standard errors are reported. Note that for small training sizes (e.g., 30
per task) we often have some task(s) that do not have any positive training sample. It is interesting
to see how well multi-task learning handles this case.
The face recognition data set is the Yale face database, which contains 165 images of 15 subjects.
The 11 images per subject correspond to different configurations in terms of expression, emotion,
illumination, and wearing glasses (or not), etc. Each image is scaled to 32 × 32 pixels. We use the
first 8 subjects to construct 8·7/2 = 28 binary classification tasks, each to classify two subjects. We
vary the size of the training set as 3, 5 and 7 images per subject. We have 30 random runs for each
training size. In each run, we randomly select the training set and use the rest as the testing set. We
collect task-average classification errors over 30 runs, and report mean and standard errors.
Choice of features is important for face recognition problems. In our experiments, we use orthogonal
Laplacianfaces [10], which have been shown to provide better discriminative power than Eigenfaces
(PCA), fisherfaces (LDA) and Laplacianfaces on several benchmark data sets. In each random run,
we extract 30 orthogonal Laplacianfaces using the selected training set of all 8 subjects², and conduct
experiments of all 28 classification tasks in the extracted feature space.
5.2 Models and Implementation Details
We use the logistic regression loss as the empirical loss L in (9). We compare the following models.
STL: learn ℓ2 regularized logistic regression for each task separately.
MTL-C: clustered multi-task learning [21], which encourages task clustering in regularization. As
discussed in Section 4.1, this is related to eq. (5) with only a task structure Ω.
MTL-F: multi-task feature learning [2], which corresponds to fixing the task covariance Ω as I_m
and optimizing (6) with only the feature covariance Σ.
In addition, we also study various different configurations of the proposed framework:
MTL(I_m&I_p): learn W using (9) with Ω and Σ fixed as identity matrices I_m and I_p.
MTL(Ω&I_p): learn W and task covariance Ω using (9), with feature covariance Σ fixed as I_p.
MTL(I_m&Σ): learn W and feature covariance Σ using (9), with task covariance Ω fixed as I_m.
MTL(Ω&Σ): learn W, Ω and Σ using (9), inferring both task and feature structures.
MTL(Ω&Σ)_{Ωii=Σjj=1}: learn W, Ω and Σ using (9), with Ω and Σ restricted as in (13) and (14).
MTL(Ω&Σ)_{Ωii=1}: learn W, Ω and Σ using (9), with Ω restricted as in (13) and Σ free. Intuitively,
free diagonal entries in Σ are useful when features are of different importance, e.g., components
extracted as orthogonal Laplacianfaces usually capture decreasing amounts of information [10].
We use conjugate gradients [22] to optimize W in (5), and infer Ω and Σ in (11) using graphical
lasso [17] as the basic solver. Regularization parameters λ and λ_Ω = λ_Σ are chosen by 3-fold cross
² For experiments with 3 images per subject, we can only extract 23 Laplacianfaces, which is limited by the
size of training examples (3 × 8 = 24) [10].
    Avg AUC Score            30 samples     40 samples     80 samples     160 samples
    STL                      64.85 (0.52)   67.62 (0.64)   71.86 (0.38)   76.22 (0.25)
    MTL-C [21]               67.09 (0.44)   68.95 (0.40)   72.89 (0.31)   76.64 (0.17)
    MTL-F [2]                72.39 (0.79)   74.75 (0.63)   77.12 (0.18)   78.13 (0.12)
    MTL(I_m&I_p)             66.10 (0.65)   69.91 (0.40)   73.34 (0.28)   76.17 (0.22)
    MTL(Ω&I_p)               74.88 (0.29)   75.83 (0.28)   76.93 (0.15)   77.95 (0.17)
    MTL(I_m&Σ)               72.71 (0.65)   74.98 (0.32)   77.35 (0.14)   78.13 (0.14)
    MTL(Ω&Σ)                 75.10 (0.27)   76.16 (0.15)   77.32 (0.24)   78.21 (0.17)*
    MTL(Ω&Σ)_{Ωii=Σjj=1}     75.31 (0.26)*  76.64 (0.13)*  77.56 (0.16)*  78.01 (0.12)
    MTL(Ω&Σ)_{Ωii=1}         75.19 (0.22)   76.25 (0.14)   77.22 (0.15)   78.03 (0.15)

Table 1: Average AUC scores (%) on landmine detection: means (and standard errors) over 30
random runs. For each column, the best model is marked with * and competitive models (by paired
t-tests) are shown in bold.
validation within the range [10⁻⁷, 10³]. The model in [21] uses 4 regularization parameters, and we
consider 3 values for each parameter, leading to 3⁴ = 81 combinations chosen by cross validation.
5.3 Results on Landmine Detection
The results on landmine detection are shown in Table 1. Each row of the table corresponds to a
model in our experiments. Each column is a training sample size. We have 30 random runs for each
sample size. We use task-average AUC score as the performance measure and report the mean and
standard error of this measure over 30 random runs. The best model is marked with *, and models
displayed in bold fonts are statistically competitive models (i.e., not significantly inferior to the best
model in a one-sided paired t-test with significance level α = 0.05).
Overall, MTL(Ω&Σ) and MTL(Ω&Σ)_{Ωii=Σjj=1} lead to the best prediction performance.
For small training sizes, restricted Ω and Σ (Ωii = Σjj = 1) offer better prediction; for the large
training size (160 per task), free Ω and Σ give the best performance. The best model performs
better than MTL-F [2] and much better than MTL-C [21] with small training sets.
MTL(I_m&I_p) performs better than STL, i.e., even the simplest coupling among tasks (by sharing λ)
can be helpful when the size of training data is small. Consider the performance of MTL(Ω&I_p) and
MTL(I_m&Σ), which learn either a task structure or a feature structure. When the size of training
samples is small (i.e., 30 or 40), coupling by task similarity is more effective, and as the training size
increases, learning a common feature representation is more helpful. Finally, consider MTL(Ω&Σ),
MTL(Ω&Σ)_{Ωii=Σjj=1} and MTL(Ω&Σ)_{Ωii=1}. MTL(Ω&Σ)_{Ωii=Σjj=1} imposes a strong restriction
and leads to better performance when the training size is small. MTL(Ω&Σ) is more flexible and
performs well given large numbers of training samples. MTL(Ω&Σ)_{Ωii=1} performs similarly to
MTL(Ω&Σ)_{Ωii=Σjj=1}, indicating no significant variation of feature importance in this problem.
5.4 Results on Face Recognition
Empirical results on face recognition are shown in Table 2, with the best model in each column
marked with * and competitive models displayed in bold. MTL-C [21] performs even worse than
STL. One possible explanation is that, since the tasks are to classify faces between different subjects,
there may not be a clustered structure over tasks and thus a cluster norm is inappropriate.
In this case, using a task similarity matrix may be more appropriate than clustering over tasks.
In addition, MTL(Ω&Σ)_{Ωii=1} shows advantages over other models, especially when given relatively
sufficient training data (5 or 7 per subject). Compared to MTL(Ω&Σ), MTL(Ω&Σ)_{Ωii=1} imposes
restrictions on the diagonal entries of the task covariance Ω: all tasks seem to be similarly difficult and
should be equally regularized. Compared to MTL(Ω&Σ)_{Ωii=Σjj=1}, MTL(Ω&Σ)_{Ωii=1} allows the
diagonal entries of the feature covariance Σ to capture varying degrees of importance of the Laplacianfaces.
    Avg Classification Errors   3 samples per class   5 samples per class   7 samples per class
    STL                         10.97 (0.46)          7.62 (0.30)           4.75 (0.35)
    MTL-C [21]                  11.09 (0.49)          7.87 (0.34)           5.33 (0.34)
    MTL-F [2]                   10.78 (0.60)          6.86 (0.27)           4.20 (0.31)
    MTL(I_m&I_p)                10.88 (0.48)          7.51 (0.28)           5.00 (0.35)
    MTL(Ω&I_p)                  9.98 (0.55)           6.68 (0.30)           4.12 (0.38)
    MTL(I_m&Σ)                  9.87 (0.59)           6.25 (0.27)           4.06 (0.34)
    MTL(Ω&Σ)                    9.81 (0.49)           6.23 (0.29)           4.11 (0.36)
    MTL(Ω&Σ)_{Ωii=Σjj=1}        9.67 (0.57)*          6.21 (0.28)           4.02 (0.32)
    MTL(Ω&Σ)_{Ωii=1}            9.67 (0.51)*          5.98 (0.29)*          3.53 (0.34)*

Table 2: Average classification errors (%) on face recognition: means (and standard errors) over 30
random runs. For each column, the best model is marked with * and competitive models (by paired
t-tests) are shown in bold.
6 Conclusion
We propose a matrix-variate normal penalty with sparse inverse covariances to couple multiple tasks.
The proposed framework provides an effective and flexible way to characterize and select both task
and feature structures for learning multiple tasks. Several recently proposed methods can be viewed
as variants of the special cases of our formulation and our empirical results on landmine detection
and face recognition show that we consistently outperform previous methods.
Acknowledgement: this work was funded in part by the National Science Foundation under grant
NSF-IIS0911032 and the Department of Energy under grant DESC0002607.
Appendix A
Proof of Lemma 1.
We prove Lemma 1 by construction. Given an arbitrary choice of λ, λ_Ω and λ_Σ > 0 in eq. (9) and an
optimal solution (W*, Ω*, Σ*), we want to prove that W* also belongs to an optimal solution for
eq. (9) with certain λ′, λ′_Ω and λ′_Σ s.t. λ′_Ω = λ′_Σ. Let us construct λ′, λ′_Ω and λ′_Σ as follows:

    (\lambda', \lambda'_\Omega, \lambda'_\Sigma) = (\lambda, \sqrt{\lambda_\Omega \lambda_\Sigma}, \sqrt{\lambda_\Omega \lambda_\Sigma})    (17)

We denote the objective function in eq. (9) with λ, λ_Ω and λ_Σ as Obj^{λ,λ_Ω,λ_Σ}(W, Ω, Σ).
Also, we denote the objective function with our constructed parameters λ′, λ′_Ω and λ′_Σ as
Obj^{λ′,λ′_Ω,λ′_Σ}(W, Ω, Σ).
For any (W, Ω, Σ), we further construct an invertible (i.e., one-to-one) transform as follows:

    (W', \Omega', \Sigma') = \Big(W, \; \sqrt{\lambda_\Sigma / \lambda_\Omega} \, \Omega, \; \sqrt{\lambda_\Omega / \lambda_\Sigma} \, \Sigma\Big)    (18)

The key step in our proof is that, by construction, the following equality always holds:

    Obj^{\lambda,\lambda_\Omega,\lambda_\Sigma}(W, \Omega, \Sigma) = Obj^{\lambda',\lambda'_\Omega,\lambda'_\Sigma}(W', \Omega', \Sigma')    (19)

To see this, notice that eq. (9) consists of three parts. The first part is the empirical loss on training
examples, depending only on W (and training data). The second part is the log-density of matrix
normal distributions, which depends on W and Ω ⊗ Σ. The third part is the sum of two ℓ1 penalties.
The equality in eq. (19) stems from the fact that all three parts of eq. (9) are unchanged: 1) W′ =
W so the first part remains unchanged; 2) Ω′ ⊗ Σ′ = Ω ⊗ Σ so the second part of the matrix normal
log-density is the same; 3) by our construction, the third part is not changed.
Based on this equality, if (W*, Ω*, Σ*) minimizes Obj^{λ,λ_Ω,λ_Σ}(), we have that
(W*, √(λ_Σ/λ_Ω) Ω*, √(λ_Ω/λ_Σ) Σ*) minimizes Obj^{λ′,λ′_Ω,λ′_Σ}(), where λ′ = λ and λ′_Ω = λ′_Σ = √(λ_Ω λ_Σ).
References
[1] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853, 2005.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS, 2006.
[3] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In NIPS, 2007.
[4] B. Bakker and T. Heskes. Task clustering and gating for bayesian multitask learning. Journal of Machine Learning Research, 4:83–99, 2003.
[5] O. Banerjee, L. E. Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data. J. Mach. Learn. Res., 9:485–516, 2008.
[6] J. Baxter. Learning Internal Representations. In COLT, pages 311–320, 1995.
[7] E. Bonilla, K. M. Chai, and C. Williams. Multi-task gaussian process prediction. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, NIPS, pages 153–160. 2008.
[8] E. V. Bonilla, F. V. Agakov, and C. K. I. Williams. Kernel multi-task learning using task-specific features. In AISTATS, 2007.
[9] P. J. Brown and M. Vannucci. Multivariate Bayesian Variable Selection and Prediction. Journal of the Royal Statistical Society, Series B, 60(3):627–641, 1998.
[10] D. Cai, X. He, J. Han, and H. Zhang. Orthogonal laplacianfaces for face recognition. IEEE Transactions on Image Processing, 15(11):3608–3614, 2006.
[11] R. Caruana. Multitask Learning. Machine Learning, 28:41–75, 1997.
[12] J. Chen, L. Tang, J. Liu, and J. Ye. A Convex Formulation for Learning Shared Structures from Multiple Tasks. In ICML, 2009.
[13] A. P. Dawid. Some matrix-variate distribution theory: Notational considerations and a bayesian application. Biometrika, 68(1):265–274, 1981.
[14] A. P. Dempster. Covariance selection. Biometrics, 1972.
[15] J. Duchi, S. Gould, and D. Koller. Projected subgradient methods for learning sparse gaussians. In Proceedings of the Twenty-fourth Conference on Uncertainty in AI (UAI), 2008.
[16] P. Dutilleul. The MLE Algorithm for the Matrix Normal Distribution. J. Statist. Comput. Simul., 64:105–123, 1999.
[17] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 2007.
[18] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman Hall, 1999.
[19] B. Hariharan, S. Vishwanathan, and M. Varma. Large scale max-margin multi-label classification with priors. In ICML, 2010.
[20] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2001.
[21] L. Jacob, F. Bach, and J. P. Vert. Clustered multi-task learning: A convex formulation. In NIPS, pages 745–752, 2008.
[22] J. Nocedal and S. Wright. Numerical Optimization. Springer, 2000.
[23] G. Obozinski, B. Taskar, and M. I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 2009.
[24] S. Thrun and J. O'Sullivan. Discovering Structure in Multiple Learning Tasks: The TC Algorithm. In ICML, pages 489–497, 1996.
[25] L. Vandenberghe, S. Boyd, and S.-P. Wu. Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19:499–533, 1996.
[26] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task learning for classification with dirichlet process priors. Journal of Machine Learning Research, 8:35–63, 2007.
[27] K. Yu, W. Chu, S. Yu, V. Tresp, and Z. Xu. Stochastic relational models for discriminative link prediction. In NIPS, pages 1553–1560, 2007.
[28] K. Yu, J. Lafferty, S. Zhu, and Y. Gong. Large-scale collaborative prediction using a nonparametric random effects model. In ICML, pages 1185–1192, 2009.
[29] S. Yu, V. Tresp, and K. Yu. Robust multi-task learning with t-processes. In ICML, page 1103, 2007.
[30] J. Zhang, Z. Ghahramani, and Y. Yang. Learning multiple related tasks using latent independent component analysis. In NIPS, pages 1585–1592, 2006.
The LASSO risk: asymptotic results and real world
examples
Mohsen Bayati
Stanford University
[email protected]
Jos?e Bento
Stanford University
[email protected]
Andrea Montanari
Stanford University
[email protected]
Abstract
We consider the problem of learning a coefficient vector x₀ ∈ R^N from a noisy
linear observation y = Ax₀ + w ∈ R^n. In many contexts (ranging from model
selection to image processing) it is desirable to construct a sparse estimator x̂.
In this case, a popular approach consists in solving an ℓ1-penalized least squares
problem known as the LASSO or Basis Pursuit DeNoising (BPDN).
For sequences of matrices A of increasing dimensions, with independent gaussian entries, we prove that the normalized risk of the LASSO converges to a limit,
and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the
LASSO for random instances. The proof technique is based on the analysis of
AMP, a recently developed efficient algorithm, that is inspired from graphical
models ideas.
Through simulations on real data matrices (gene expression data and hospital medical records) we observe that these results can be relevant in a broad array of practical applications.
1 Introduction
Let x₀ ∈ R^N be an unknown vector, and assume that a vector y ∈ R^n of noisy linear measurements of x₀ is available. The problem of reconstructing x₀ from such measurements arises in a
number of disciplines, ranging from statistical learning to signal processing. In many contexts the
measurements are modeled by

    y = A x_0 + w ,    (1.1)

where A ∈ R^{n×N} is a known measurement matrix, and w is a noise vector.
The LASSO or Basis Pursuit Denoising (BPDN) is a method for reconstructing the unknown vector
x0 given y, A, and is particularly useful when one seeks sparse solutions. For given A, y, one
considers the cost function C_{A,y} : R^N → R defined by

    C_{A,y}(x) = \frac{1}{2} \|y - Ax\|^2 + \lambda \|x\|_1 ,    (1.2)

with λ > 0. The original signal is estimated by

    \hat{x}(\lambda; A, y) = \mathrm{argmin}_x \, C_{A,y}(x) .    (1.3)

In what follows we shall often omit the arguments A, y (and occasionally λ) from the above notations. We will also use x̂(λ; N) to emphasize the N-dependence. Further, ‖v‖_p ≡ (\sum_{i=1}^{m} v_i^p)^{1/p}
denotes the ℓ_p-norm of a vector v ∈ R^m (the subscript p will often be omitted if p = 2).
A large and rapidly growing literature is devoted to (i) Developing fast algorithms for solving the
optimization problem (1.3); (ii) Characterizing the performance and optimality of the estimator x̂.
We refer to Section 1.3 for an unavoidably incomplete overview.
Despite such substantial effort, and many remarkable achievements, our understanding of (1.3) is
not even comparable to the one we have of more classical topics in statistics and estimation theory.
For instance, the best bound on the mean square error (MSE) of the estimator (1.3), i.e. on the
quantity N⁻¹‖x̂ − x₀‖², was proved by Candes, Romberg and Tao [CRT06] (who in fact did not
consider the LASSO but a related optimization problem). Their result estimates the mean square
error only up to an unknown numerical multiplicative factor. Work by Candes and Tao [CT07] on
the analogous Dantzig selector, upper bounds the mean square error up to a factor C log N , under
somewhat different assumptions.
The objective of this paper is to complement this type of "rough but robust" bounds by proving
asymptotically exact expressions for the mean square error. Our asymptotic result holds almost
surely for sequences of random matrices A with fixed aspect ratio and independent gaussian entries.
While this setting is admittedly specific, the careful study of such matrix ensembles has a long
tradition both in statistics and communications theory and has spurred many insights [Joh06, Tel99].
Further, our main result provides asymptotically exact expressions for other operating characteristics
of the LASSO as well (e.g., False Positive Rate and True Positive Rate). We carried out simulations
on real data matrices with continuous entries (gene expression data) and binary feature matrices
(hospital medical records). The results appear to be quite encouraging.
Although our rigorous results are asymptotic in the problem dimensions, numerical simulations have
shown that they are accurate already on problems with a few hundreds of variables. Further, they
seem to enjoy a remarkable universality property and to hold for a fairly broad family of matrices
[DMM10]. Both these phenomena are analogous to ones in random matrix theory, where delicate
asymptotic properties of gaussian ensembles were subsequently proved to hold for much broader
classes of random matrices. Also, asymptotic statements in random matrix theory have been replaced over time by concrete probability bounds in finite dimensions. Of course the optimization
problem (1.2) is not immediately related to spectral properties of the random matrix A. As a consequence, universality and non-asymptotic results in random matrix theory cannot be directly exported
to the present problem. Nevertheless, we expect such developments to be foreseeable.
Our proof is based on the analysis of an efficient iterative algorithm first proposed by [DMM09],
and called AMP, for approximate message passing. The algorithm is inspired by belief-propagation
on graphical models, although the resulting iteration is significantly simpler (and scales linearly
in the number of nodes). Extensive simulations [DMM10] showed that, in a number of settings,
AMP performances are statistically indistinguishable to the ones of LASSO, while its complexity is
essentially as low as the one of the simplest greedy algorithms.
The proof technique just described is new. Earlier literature analyzes the convex optimization problem (1.3) (or similar problems) by a clever construction of an approximate optimum, or of a dual
witness. Such constructions are largely explicit. Here instead we prove an asymptotically exact
characterization of a rather non-trivial iterative algorithm. The algorithm is then proved to converge
to the exact optimum. Due to limited space in this paper we only state the main steps of the proof.
More details are available in [BM10b].
1.1 Definitions
In order to define the AMP algorithm, we denote by η : R × R₊ → R the soft thresholding function

    \eta(x; \theta) = \begin{cases} x - \theta & \text{if } x > \theta, \\ 0 & \text{if } -\theta \le x \le \theta, \\ x + \theta & \text{otherwise.} \end{cases}    (1.4)

The algorithm constructs a sequence of estimates x^t ∈ R^N, and residuals z^t ∈ R^n, according to the
iteration

    x^{t+1} = \eta(A^* z^t + x^t; \theta_t) ,    (1.5)
    z^t = y - A x^t + \frac{\|x^t\|_0}{n} \, z^{t-1} ,

initialized with x⁰ = 0. Here A^* denotes the transpose of matrix A, and ‖x^t‖₀ is the number of
non-zero entries of x^t. Given a scalar function f and a vector u ∈ R^m, we let f(u) denote the vector
(f(u₁), ..., f(u_m)) ∈ R^m obtained by applying f componentwise. Finally ⟨u⟩ ≡ m⁻¹ \sum_{i=1}^{m} u_i is
the average of the vector u ∈ R^m.
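For concreteness, a minimal NumPy sketch of iteration (1.5) (ours). The threshold rule below, θ_t = α‖z^t‖₂/√n, estimates a multiple of the residual standard deviation; it is our choice for illustration and is not part of the definition above:

```python
import numpy as np

def soft_threshold(x, theta):
    """Componentwise soft thresholding, eq. (1.4)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def amp(A, y, alpha, n_iter=100):
    n, N = A.shape
    x, z = np.zeros(N), y.copy()   # x^0 = 0, hence z^0 = y
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(n)
        x = soft_threshold(A.T @ z + x, theta)          # eq. (1.5), first line
        z = y - A @ x + (np.count_nonzero(x) / n) * z   # eq. (1.5), second line
    return x
```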
As already mentioned, we will consider sequences of instances of increasing sizes, along which the
LASSO behavior has a non-trivial limit.
Definition 1. The sequence of instances {x₀(N), w(N), A(N)}_{N∈ℕ} indexed by N is said to be a
converging sequence if x₀(N) ∈ R^N, w(N) ∈ R^n, A(N) ∈ R^{n×N} with n = n(N) such that
n/N → δ ∈ (0, ∞), and in addition the following conditions hold:
(a) The empirical¹ distribution of the entries of x₀(N) converges weakly to a probability measure
p_{X₀} on R with bounded second moment. Further N⁻¹ \sum_{i=1}^{N} x_{0,i}(N)^2 → E_{p_{X₀}}{X₀²}.
(b) The empirical distribution of the entries of w(N) converges weakly to a probability measure p_W
on R with bounded second moment. Further n⁻¹ \sum_{i=1}^{n} w_i(N)^2 → E_{p_W}{W²}.
(c) If {eᵢ}_{1≤i≤N}, eᵢ ∈ R^N denotes the standard basis, then max_{i∈[N]} ‖A(N)eᵢ‖₂,
min_{i∈[N]} ‖A(N)eᵢ‖₂ → 1 as N → ∞, where [N] ≡ {1, 2, ..., N}.
For a converging sequence of instances, and an arbitrary sequence of thresholds {θₜ}ₜ≥₀ (independent of N), the asymptotic behavior of the recursion (1.5) can be characterized as follows.
Define the sequence {τₜ²}ₜ≥₀ by setting τ₀² = σ² + E{X₀²}/δ (for X₀ ∼ p_{X₀} and σ² ≡ E{W²},
W ∼ p_W) and letting, for all t ≥ 0, τₜ₊₁² = F(τₜ², θₜ) with

    F(\tau^2, \theta) \equiv \sigma^2 + \frac{1}{\delta} \, E\{[\eta(X_0 + \tau Z; \theta) - X_0]^2\} ,

where Z ∼ N(0, 1) is independent of X₀. Notice that the function F depends on the law p_{X₀}.
We say a function ψ : R² → R is pseudo-Lipschitz if there exists a constant L > 0 such that for all
x, y ∈ R²: |ψ(x) − ψ(y)| ≤ L(1 + ‖x‖₂ + ‖y‖₂)‖x − y‖₂. (This is a special case of the definition
used in [BM10a] where such a function is called pseudo-Lipschitz of order 2.)
Our next proposition was conjectured in [DMM09] and proved in [BM10a]. It shows that the
behavior of AMP can be tracked by the above one-dimensional recursion. We often refer to this
prediction as state evolution.
Theorem 1 ([BM10a]). Let {x₀(N), w(N), A(N)}_{N∈ℕ} be a converging sequence of instances with
the entries of A(N) iid normal with mean 0 and variance 1/n and let ψ : R × R → R be a pseudo-Lipschitz function. Then, almost surely

    \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \psi(x_i^{t+1}, x_{0,i}) = E\{\psi(\eta(X_0 + \tau_t Z; \theta_t), X_0)\} ,    (1.6)

where Z ∼ N(0, 1) is independent of X₀ ∼ p_{X₀}.
In order to establish the connection with the LASSO, a specific policy has to be chosen for the
thresholds {θₜ}ₜ≥₀. Throughout this paper we will take θₜ = ατₜ with α fixed. In other words,
the sequence {τₜ}ₜ≥₀ is given by the recursion τₜ₊₁² = F(τₜ², ατₜ). This choice enjoys several
convenient properties [DMM09].
1.2 Main result
Before stating our results, we have to describe a calibration mapping between α and λ that was
introduced in [DMM10] (Propositions 2, 3 and Corollary 4). Their proofs are presented in [BM10b].
Let us start by stating some convenient properties of the state evolution recursion.
Proposition 2 ([DMM09]). Let αmin = αmin(δ) be the unique non-negative solution of the equation

    (1 + \alpha^2)\Phi(-\alpha) - \alpha\phi(\alpha) = \frac{\delta}{2} ,

with φ(z) ≡ e^{−z²/2}/√(2π) the standard gaussian density and Φ(z) ≡ \int_{-\infty}^{z} \phi(x)\,dx.
For any σ² > 0 and α > αmin(δ), the fixed point equation τ² = F(τ², ατ) admits a unique solution.
Denoting by τ* = τ*(α) this solution, we have limₜ→∞ τₜ = τ*(α). Further, the convergence takes
place for any initial condition and is monotone. Finally, (dF/dτ²)(τ², ατ) < 1 at τ = τ*.
¹ The probability distribution that puts a point mass 1/N at each of the N entries of the vector.
We then define the function α ↦ λ(α) on (αmin(δ), ∞), by

    \lambda(\alpha) \equiv \alpha \tau_* \Big[ 1 - \frac{1}{\delta} \, P\{|X_0 + \tau_* Z| \ge \alpha \tau_*\} \Big] .

This function defines a correspondence (calibration) between the sequence of thresholds {θₜ}ₜ≥₀
and the regularization parameter λ. It should be intuitively clear that larger λ corresponds to larger
thresholds and hence larger α, since both cases yield smaller estimates of x₀.
In the following we will need to invert this function. We thus define α : (0, ∞) → (αmin, ∞) in
such a way that α(λ) ∈ {a ∈ (αmin, ∞) : λ(a) = λ}.
The next result implies that the set on the right-hand side is non-empty and therefore the function
λ ↦ α(λ) is well defined.
Proposition 3 ([DMM10]). The function α ↦ λ(α) is continuous on the interval (αmin, ∞) with
λ(αmin+) = −∞ and lim_{α→∞} λ(α) = ∞.
Therefore the function λ ↦ α(λ) satisfying α(λ) ∈ {a ∈ (αmin, ∞) : λ(a) = λ} exists.
We will denote by A = α((0, ∞)) the image of the function α. Notice that the definition of α is a
priori not unique. We will see that uniqueness follows from our main theorem.
Examples of the mappings τ² ↦ F(τ², ατ), α ↦ τ*(α) and α ↦ λ(α) are presented in [BM10b].
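Continuing the state-evolution sketch above (it reuses X0, Z, delta and tau_star from there), the calibration and its numerical inverse can be written as follows; the bisection bracket is our assumption and must sit above αmin(δ), in a region where λ(α) is increasing:

```python
import numpy as np

def lam_of_alpha(alpha):
    """lambda(alpha) = alpha*tau_* [1 - P{|X0 + tau_* Z| >= alpha*tau_*} / delta]."""
    tau = tau_star(alpha)
    return alpha * tau * (1.0 - np.mean(np.abs(X0 + tau * Z) >= alpha * tau) / delta)

def alpha_of_lam(lam, lo=1.0, hi=4.0, n_iter=60):
    """Invert the calibration by plain bisection on [lo, hi]."""
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lam_of_alpha(mid) < lam else (lo, mid)
    return 0.5 * (lo + hi)
```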
We can now state our main result.
Theorem 2. Let {x₀(N), w(N), A(N)}_{N∈ℕ} be a converging sequence of instances with the entries
of A(N) iid normal with mean 0 and variance 1/n. Denote by x̂(λ; N) the LASSO estimator
for instance (x₀(N), w(N), A(N)), with σ², λ > 0 and P{X₀ ≠ 0} > 0, and let ψ : R × R → R be a
pseudo-Lipschitz function. Then, almost surely

    \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \psi(\hat{x}_i, x_{0,i}) = E\{\psi(\eta(X_0 + \tau_* Z; \theta_*), X_0)\} ,    (1.7)

where Z ∼ N(0, 1) is independent of X₀ ∼ p_{X₀}, τ* = τ*(α(λ)) and θ* = α(λ)τ*(α(λ)).
As a corollary, the function λ ↦ α(λ) is indeed uniquely defined.
Corollary 4. For any λ, σ² > 0 there exists a unique α > αmin such that λ(α) = λ (with the
function α → λ(α) defined by λ(α) = ατ*[1 − (1/δ) P{|X₀ + τ*Z| ≥ ατ*}]).
Hence the function λ ↦ α(λ) is continuous and non-decreasing with α((0, ∞)) ≡ A = (α₀, ∞).
The assumption of a converging problem sequence is important for the result to hold, while the
hypothesis of gaussian measurement matrices A(N) is necessary for the proof technique to be correct. On the other hand, the restrictions λ, σ² > 0, and P{X₀ ≠ 0} > 0 (whence θ* ≠ 0 using
λ(α) = ατ*[1 − (1/δ) P{|X₀ + τ*Z| ≥ ατ*}]) are made in order to avoid technical complications due
to degenerate cases. Such cases can be resolved by continuity arguments.
1.3 Related work
The LASSO was introduced in [Tib96, CD95]. Several papers provide performance guarantees for
the LASSO or similar convex optimization methods [CRT06, CT07], by proving upper bounds on
the resulting mean square error. These works assume an appropriate "isometry" condition to hold for
A. While such condition hold with high probability for some random matrices, it is often difficult to
verify them explicitly. Further, it is only applicable to very sparse vectors x0 . These restrictions are
intrinsic to the worst-case point of view developed in [CRT06, CT07].
Guarantees have been proved for correct support recovery in [ZY06], under an appropriate "irrepresentability" assumption on A. While support recovery is an interesting conceptualization for some
applications (e.g. model selection), the metric considered in the present paper (mean square error)
provides complementary information and is quite standard in many different fields.
Closer to the spirit of this paper [RFG09] derived expressions for the mean square error under
the same model considered here. Similar results were presented recently in [KWT09, GBS09].
These papers argue that a sharp asymptotic characterization of the LASSO risk can provide valuable
guidance in practical applications. For instance, it can be used to evaluate competing optimization
methods on large scale applications, or to tune the regularization parameter ?.
Unfortunately, these results were non-rigorous and were obtained through the famously powerful
"replica method" from statistical physics [MM09].
Let us emphasize that the present paper offers two advantages over these recent developments: (i)
It is completely rigorous, thus putting on a firmer basis this line of research; (ii) It is algorithmic in
that the LASSO mean square error is shown to be equivalent to the one achieved by a low-complexity
message passing algorithm.
2 Numerical illustrations
Theorem 2 assumes that the entries of the matrix A are iid gaussians. We expect, however, our
predictions to be robust and to hold for a much larger family of matrices. Rigorous evidence in this
direction is presented in [KM10], where the normalized cost C(x̂)/N is shown to have a limit as
N → ∞ which is universal with respect to random matrices A with iid entries. (More precisely, it
is universal if E{A_ij} = 0, E{A_ij²} = 1/n and E{A_ij⁶} ≤ C/n³ for a uniform constant C.)
Further, our result is asymptotic, and one might wonder how accurate it is for instances of
moderate dimensions.
Numerical simulations were carried out in [DMM10] and suggest that the result is robust and relevant already for N of the order of a few hundreds. As an illustration, we present in Figures 1-3
the outcome of such simulations for four types of real data and random matrices. We generated the
signal vector randomly with entries in {+1, 0, −1} and P(x₀,ᵢ = +1) = P(x₀,ᵢ = −1) = 0.05. The
noise vector w was generated using i.i.d. N(0, 0.2) entries.
We obtained the optimum estimator x̂ using OWL-QN and l1_ls, packages for solving large-scale
ℓ1-regularized regressions [KKL+07], [AJ07]. We used 40 values of λ between .05 and 2 and N
equal to 500, 1000, and 2000. For each case, the point (λ, MSE) was plotted and the results are
shown in the figures. Continuous lines correspond to the asymptotic prediction of Theorem 2 for
ψ(a, b) = (a − b)², namely

    \mathrm{MSE} = \lim_{N \to \infty} N^{-1} \|\hat{x} - x_0\|^2 = E\{[\eta(X_0 + \tau_* Z; \theta_*) - X_0]^2\} = \delta(\tau_*^2 - \sigma^2) .
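The prediction curves in the figures can be reproduced directly from the snippets of Section 1.2 (this continues those sketches and reuses tau_star, alpha_of_lam, sigma2 and delta defined there; the grid of λ values is ours):

```python
import numpy as np

for lam in np.linspace(0.05, 2.0, 10):
    alpha = alpha_of_lam(lam)
    tau = tau_star(alpha)
    print(f"lambda = {lam:4.2f}  predicted MSE = {delta * (tau**2 - sigma2):.4f}")
```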
The agreement is remarkably good already for N, n of the order of a few hundreds, and deviations
are consistent with statistical fluctuations.
The four figures correspond to measurement matrices A:
Figure 1(a): Data consist of 2253 measurements of expression level of 7077 genes (this data is
provided to us by the Broad Institute). From this matrix we took sub-matrices A of aspect ratio δ for
each N . The entries were continuous variables. We standardized all columns of A to have mean 0
and variance 1.
Figure 1(b): From a data set of 1932 patient records we extracted 4833 binary features describing
demographic information, medical history, lab results, medications etc. The 0-1 matrix was sparse
(with only 3.1% non-zero entries). Similar to the gene expression data, for each N the sub-matrices A with aspect ratio δ were selected and standardized.
Figure 2(a): Random ±1 matrices with aspect ratio δ. Each entry is independently equal to +1/√n
or −1/√n with equal probability.
Figure 2(b): Random gaussian matrices with aspect ratio δ and iid N(0, 1/n) entries (as in Theorem
2).
Notice that the behavior appears to be essentially indistinguishable. Also, the asymptotic prediction has
a minimum as a function of λ. The location of this minimum can be used to select the regularization
parameter. Further empirical analysis is presented in [BBM10].
For the second data set, "patient records", we repeated the simulation 20 times (each time with fresh
x0 and w) and obtained the average and standard error for MSE, False Positive Rate (FPR) and True
Positive Rate (TPR). The results with error bars are shown in Figure 3. The length of each error bar
[Figure 1: two panels, (a) Gene expression data and (b) Hospital records, plotting MSE versus λ for N = 500, 1000, 2000 together with the asymptotic prediction.]
Figure 1: Mean square error (MSE) as a function of the regularization parameter λ compared to the
asymptotic prediction, for δ = .5 and σ² = .2. In plot (a) the measurement matrix A is a real-valued
(standardized) matrix of gene expression data and in plot (b) A is a (standardized) 0-1 feature matrix
of hospital records. Each point in these plots is generated by finding the LASSO predictor x̂ using
a measurement vector y = Ax₀ + w for an independent signal vector x₀ and an independent noise
vector w.
[Figure 2 appears here: two panels, (a) ±1 matrices and (b) Gaussian matrices, each plotting MSE against λ for N = 500, 1000, 2000 together with the asymptotic prediction.]
Figure 2: As in Figure 1, but the measurement matrix A has iid entries that are equal to ±1/√n
with equal probabilities in plot (a), and has iid N(0, 1/n) entries in plot (b). Additionally, each point
in these plots uses an independent matrix A.
is equal to twice the standard error (in each direction). FPR and TPR are calculated using
FPR ≡ ( Σ_{i=1}^N I{x̂_i ≠ 0} I{x_{0,i} = 0} ) / ( Σ_{i=1}^N I{x_{0,i} = 0} ),   TPR ≡ ( Σ_{i=1}^N I{x̂_i ≠ 0} I{x_{0,i} ≠ 0} ) / ( Σ_{i=1}^N I{x_{0,i} ≠ 0} ),   (2.1)
where I{S} = 1 if statement S holds and I{S} = 0 otherwise. The predictions for FPR and TPR
are obtained by applying Theorem 2 to ψ_fpr(a, b) ≡ I{a ≠ 0} I{b = 0} and ψ_tpr(a, b) ≡ I{a ≠ 0} I{b ≠ 0},
which yields
lim_{N→∞} FPR = 2 Φ(−α),   lim_{N→∞} TPR = Φ(−α + 1/τ*) + Φ(−α − 1/τ*),   (2.2)

where α is defined in Proposition 2. Note that the functions ψ_fpr(a, b) and ψ_tpr(a, b) are not pseudo-Lipschitz, but the limits (2.2) follow from Theorem 2 via standard weak-convergence arguments.
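Both the empirical rates of Eq. (2.1) and the limits of Eq. (2.2) are direct to compute. In the sketch below (a minimal illustration, not the experiment code), alpha and tau_star are taken as given from the calibration of Proposition 2:

import numpy as np
from scipy.stats import norm

def empirical_rates(x_hat, x0):
    """FPR and TPR of an estimate, Eq. (2.1)."""
    detected, active = (x_hat != 0), (x0 != 0)
    fpr = np.sum(detected & ~active) / np.sum(~active)
    tpr = np.sum(detected & active) / np.sum(active)
    return fpr, tpr

def predicted_rates(alpha, tau_star):
    """Asymptotic limits, Eq. (2.2)."""
    fpr = 2.0 * norm.cdf(-alpha)
    tpr = norm.cdf(-alpha + 1.0 / tau_star) + norm.cdf(-alpha - 1.0 / tau_star)
    return fpr, tpr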
3 A structural property and proof of the main theorem
We will prove the following theorem which implies our main result, Theorem 2.
Theorem 3. Assume the hypotheses of Theorem 2. Denote by {x^t(N)}_{t≥0} the sequence of estimates
produced by AMP. Then lim_{t→∞} lim_{N→∞} N⁻¹ ‖x^t(N) − x̂(λ; N)‖₂² = 0, almost surely.
[Figure 3 appears here: three panels plotting MSE, False Positive Rate, and True Positive Rate against λ for N = 500, 1000, 2000, with error bars, together with the asymptotic prediction.]
Figure 3: Average of MSE, FPR and TPR versus λ for medical data, using 20 samples per λ and N.
All parameters are similar to Figure 1(b). Error bars are twice the standard errors (in each direction).
The rest of the paper is devoted to the proof of this theorem. Section 3.1 proves a structural property
that is the key tool in this proof. Section 3.2 uses this property together with a few lemmas to prove
Theorem 3. Proofs of lemmas and more details can be found in [BM10b].
The proof of Theorem 2 follows immediately: when ψ is Lipschitz there is a constant B such that

| N⁻¹ Σ_{i=1}^N ψ(x_i^{t+1}, x_{0,i}) − N⁻¹ Σ_{i=1}^N ψ(x̂_i, x_{0,i}) | ≤ B N^{−1/2} ‖x^{t+1} − x̂‖₂ .

We then obtain

lim_{N→∞} N⁻¹ Σ_{i=1}^N ψ(x̂_i, x_{0,i}) = lim_{t→∞} lim_{N→∞} N⁻¹ Σ_{i=1}^N ψ(x_i^{t+1}, x_{0,i}) = E{ψ(η(X₀ + τ* Z; θ*), X₀)},

where we used Theorem 1 and Proposition 2. The case of pseudo-Lipschitz ψ is a straightforward
generalization.
Some notation. For any non-empty subset S of [m] and any k × m matrix M, we denote by M_S
the k × |S| sub-matrix of M that contains only the columns of M corresponding to S. Also define
the scalar product ⟨u, v⟩ ≡ m⁻¹ Σ_{i=1}^m u_i v_i for u, v ∈ R^m. Finally, the subgradient of a convex
function f : R^m → R at a point x ∈ R^m is denoted by ∂f(x). In particular, remember that the
subgradient of the ℓ₁ norm, x ↦ ‖x‖₁, is given by

∂‖x‖₁ = { v ∈ R^m : |v_i| ≤ 1 ∀i, and x_i ≠ 0 ⇒ v_i = sign(x_i) } .   (3.1)
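For concreteness, (3.1) can be checked numerically; the helper below is only an illustration of the definition:

import numpy as np

def in_l1_subgradient(v, x, tol=1e-8):
    """Return True if v lies in the subgradient of the l1 norm at x, Eq. (3.1)."""
    if np.any(np.abs(v) > 1.0 + tol):
        return False
    support = x != 0
    return bool(np.all(np.abs(v[support] - np.sign(x[support])) <= tol))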
3.1 A structural property of the LASSO cost function
One main challenge in the proof of Theorem 2 lies in the fact that the function x ↦ C_{A,y}(x) is not,
in general, strictly convex. Hence there can be, in principle, vectors x of cost very close to the
optimum and nevertheless far from the optimum. The following Lemma provides conditions under
which this does not happen.
Lemma 1. There exists a function ξ(ε, c₁, …, c₅) such that the following happens. If x, r ∈ R^N
satisfy the following conditions:

(1) ‖r‖₂ ≤ c₁ √N;
(2) C(x + r) ≤ C(x);
(3) there exists a subgradient sg(C, x) ∈ ∂C(x) with ‖sg(C, x)‖₂ ≤ √N ε;
(4) letting v ≡ (1/λ)[A*(y − Ax) + sg(C, x)] ∈ ∂‖x‖₁ and S(c₂) ≡ {i ∈ [N] : |v_i| ≥ 1 − c₂}, we have σ_min(A_{S(c₂)∪S′}) ≥ c₄ for any S′ ⊆ [N] with |S′| ≤ c₃ N;
(5) the maximum and minimum non-zero singular values of A satisfy c₅⁻¹ ≤ σ_min(A)² ≤ σ_max(A)² ≤ c₅;

then ‖r‖₂ ≤ √N ξ(ε, c₁, …, c₅). Further, for any c₁, …, c₅ > 0, ξ(ε, c₁, …, c₅) → 0 as ε → 0.
Further, if ker(A) = {0}, the same conclusion holds under conditions 1, 2, 3, and 5.
3.2 Proof of Theorem 3
The proof is based on a series of lemmas that are used to check the assumptions of Lemma 1.
The next lemma implies that submatrices of A constructed using the first t iterations of the AMP
algorithm are non-singular (more precisely, have singular values bounded away from 0).
Lemma 2. Let S ⊆ [N] be measurable on the σ-algebra 𝔖_t generated by {z⁰, …, z^{t−1}} and
{x⁰ + A*z⁰, …, x^{t−1} + A*z^{t−1}}, and assume |S| ≤ N(δ − c) for some c > 0. Then there exist
a₁ = a₁(c) > 0 (independent of t) and a₂ = a₂(c, t) > 0 (depending on t and c) such that
min_{S′} {σ_min(A_{S∪S′}) : S′ ⊆ [N], |S′| ≤ a₁ N} ≥ a₂, with probability converging to 1 as N → ∞.
We will apply this lemma to a specific choice of the set S. Namely, defining

v^t ≡ (1/θ_{t−1}) (x^{t−1} + A* z^{t−1} − x^t),   (3.2)

our last lemma shows convergence of a particular sequence of sets provided by v^t.
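For orientation, here is a minimal AMP iteration for the LASSO, with the vectors v^t of Eq. (3.2) computed alongside. The threshold schedule is one assumption of this sketch (θ_t = α τ̂_t with τ̂_t estimated from ‖z^t‖, a common choice following [DMM09]); it is illustrative rather than the exact implementation analyzed in the proofs:

import numpy as np

def eta(u, theta):
    """Soft thresholding eta(u; theta)."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def amp_lasso(y, A, alpha=1.5, n_iter=50):
    """AMP iterates x^t for the LASSO, plus the vectors v^t of Eq. (3.2)."""
    n, N = A.shape
    delta = n / N
    x, z = np.zeros(N), y.copy()
    v_list = []
    for t in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(n)   # estimated effective noise level
        theta = alpha * tau                    # assumed threshold schedule
        u = x + A.T @ z                        # pseudo-data x^t + A* z^t
        x_new = eta(u, theta)
        v_list.append((u - x_new) / theta)     # v^{t+1}, cf. Eq. (3.2)
        onsager = np.mean(x_new != 0) / delta  # Onsager correction term
        z = y - A @ x_new + z * onsager
        x = x_new
    return x, v_list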
Lemma 3. Fix γ ∈ (0, 1) and let the sequence {S_t(γ)}_{t≥0} be defined by S_t(γ) ≡ {i ∈ [N] :
|v_i^t| ≥ 1 − γ}. For any ξ > 0 there exists t_* = t_*(ξ, γ) < ∞ such that, for all t₂ ≥ t₁ ≥ t_*:
lim_{N→∞} P( |S_{t₂}(γ) \ S_{t₁}(γ)| ≥ N ξ ) = 0.
The last two lemmas imply the following.
Proposition 5. There exist constants γ₁ ∈ (0, 1), γ₂, γ₃ > 0 and t_min < ∞ such that, for any
t ≥ t_min, min{σ_min(A_{S_t(γ₁)∪S′}) : S′ ⊆ [N], |S′| ≤ γ₂ N} ≥ γ₃ with probability converging to
1 as N → ∞.
Proof of Theorem 3. We apply Lemma 1 to x = x^t, the AMP estimate, and r = x̂ − x^t, the distance
from the LASSO optimum. The thesis follows by checking conditions 1–5. Namely, we need to
show that there exist constants c₁, …, c₅ > 0 and, for each ε > 0, some t = t(ε) such that conditions 1–5
hold with probability going to 1 as N → ∞.
Condition 1 holds since lim_{N→∞} ⟨x̂, x̂⟩ and lim_{N→∞} ⟨x^t, x^t⟩ for all t are finite.
Condition 2 is immediate since x + r = x̂ minimizes C( · ).
Conditions 3–4: take v = v^t as defined in Eq. (3.2). Using the definition (1.5), it is easy to check
that v^t ∈ ∂‖x^t‖₁. Further, it can be shown that v^t = (1/λ)[A*(y − Ax^t) + sg(C, x^t)], with sg(C, x^t)
a subgradient satisfying lim_{t→∞} lim_{N→∞} N⁻¹ ‖sg(C, x^t)‖₂² = 0. This proves condition 3, and
condition 4 holds by Proposition 5.
Condition 5 follows from standard limit theorems on the singular values of Wishart matrices.
Acknowledgement
This work was partially supported by a Terman fellowship, the NSF CAREER award CCF-0743978,
the NSF grant DMS-0806211 and a Portuguese Doctoral FCT fellowship.
References
[AJ07] G. Andrew and J. Gao, Scalable training of l1-regularized log-linear models, Proceedings of the 24th International Conference on Machine Learning, 2007, pp. 33–40.
[BBM10] M. Bayati, J. A. Bento, and A. Montanari, The LASSO risk: asymptotic results and real world examples, long version (in preparation), 2010.
[BM10a] M. Bayati and A. Montanari, The dynamics of message passing on dense graphs, with applications to compressed sensing, Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2010. Longer version at http://arxiv.org/abs/1001.3448.
[BM10b] M. Bayati and A. Montanari, The LASSO risk for Gaussian matrices, 2010. Preprint available at http://arxiv.org/abs/1008.2581.
[CD95] S. S. Chen and D. L. Donoho, Examples of basis pursuit, Proceedings of Wavelet Applications in Signal and Image Processing III (San Diego, CA), 1995.
[CRT06] E. Candes, J. K. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Communications on Pure and Applied Mathematics 59 (2006), 1207–1223.
[CT07] E. Candes and T. Tao, The Dantzig selector: statistical estimation when p is much larger than n, Annals of Statistics 35 (2007), 2313–2351.
[DMM09] D. L. Donoho, A. Maleki, and A. Montanari, Message passing algorithms for compressed sensing, Proceedings of the National Academy of Sciences 106 (2009), 18914–18919.
[DMM10] D. L. Donoho, A. Maleki, and A. Montanari, The noise sensitivity phase transition in compressed sensing, preprint, 2010.
[GBS09] D. Guo, D. Baron, and S. Shamai, A single-letter characterization of optimal noisy compressed sensing, 47th Annual Allerton Conference (Monticello, IL), September 2009.
[Joh06] I. Johnstone, High dimensional statistical inference and random matrices, Proc. International Congress of Mathematicians (Madrid), 2006.
[KKL+07] S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, A method for large-scale l1-regularized least squares, IEEE Journal on Selected Topics in Signal Processing 1 (2007), 606–617.
[KM10] S. Korada and A. Montanari, Applications of Lindeberg principle in communications and statistical learning, preprint available at http://arxiv.org/abs/1004.0557, 2010.
[KWT09] Y. Kabashima, T. Wadayama, and T. Tanaka, A typical reconstruction limit for compressed sensing based on lp-norm minimization, J. Stat. Mech. (2009), L09003.
[MM09] M. Mézard and A. Montanari, Information, Physics and Computation, Oxford University Press, Oxford, 2009.
[RFG09] S. Rangan, A. K. Fletcher, and V. K. Goyal, Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing, Advances in Neural Information Processing Systems (NIPS), 2009.
[Tel99] E. Telatar, Capacity of multi-antenna Gaussian channels, European Transactions on Telecommunications 10 (1999), 585–595.
[Tib96] R. Tibshirani, Regression shrinkage and selection with the lasso, J. Royal Statist. Soc. B 58 (1996), 267–288.
[ZY06] P. Zhao and B. Yu, On model selection consistency of Lasso, The Journal of Machine Learning Research 7 (2006), 2541–2563.
Hallucinations in Charles Bonnet Syndrome Induced
by Homeostasis: a Deep Boltzmann Machine Model
David P. Reichert, Peggy Seriès and Amos J. Storkey
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh, EH8 9AB
{d.p.reichert@sms., pseries@inf., a.storkey@} ed.ac.uk
Abstract
The Charles Bonnet Syndrome (CBS) is characterized by complex vivid visual
hallucinations in people with, primarily, eye diseases and no other neurological
pathology. We present a Deep Boltzmann Machine model of CBS, exploring
two core hypotheses: First, that the visual cortex learns a generative or predictive model of sensory input, thus explaining its capability to generate internal
imagery. And second, that homeostatic mechanisms stabilize neuronal activity
levels, leading to hallucinations being formed when input is lacking. We reproduce a variety of qualitative findings in CBS. We also introduce a modification to
the DBM that allows us to model a possible role of acetylcholine in CBS as mediating the balance of feed-forward and feed-back processing. Our model might
provide new insights into CBS and also demonstrates that generative frameworks
are promising as hypothetical models of cortical learning and perception.
1 Introduction
Complex visual hallucinations [1] can offer a fascinating insight into how the brain realizes visual
perception. The content of such hallucinations can be highly elaborate, consisting of people, animals, objects and whole scenes, and the images supposedly can "exceed anything seen in real life"
in detail and vividness [1]. Attempts have been made to unify complex hallucinations in various
pathologies in one qualitative model, but many argue that the underlying causal mechanisms are too
varied to do so [2]. Of particular interest is the Charles Bonnet Syndrome (CBS) [3, 4, 5], where
patients experience complex visual hallucinations which appear not to be causally related to any
other impairment to mental health and where the primary pathology is one of loss of vision due to
eye diseases. Sensory deprivation is thus implicated as playing a key role in the development of
CBS, and comparisons have been made to phantom limb phenomena [3, 5].
The mechanisms behind complex hallucinations remain obscure. Theories of CBS are descriptive in
nature and no computational model exists. For example, hallucinations are attributed to "perceptual
traces being released" [3] that would normally be inhibited by sensory input, or it is suggested that
experience in general is evoked by internally generated neuronal activity in distributed networks,
a "neuromatrix", imposing meaning on sensory input or onto unspecific input in the case of hallucinations [3, 5]. The phenomenon of internally generated images becomes less mysterious if one
assumes the cortex implements an actual generative model of sensory input. The hypothesis that
cortical learning is driven by prediction or reconstruction of sensory input is promising as it could
explain how the brain might learn in an unsupervised fashion, evaluating its internal model of the
world by matching predictions to actual input [6, 7]. Consequently, the idea that disorders including hallucinations are caused by mismatches between internally generated expectations and sensory
input has recently found interest in psychology [8, 9]. If generating internal imagery is an essential aspect of normal vision, then this could explain why hallucinations occur in so many different
pathologies, even sometimes when there is no direct malfunction of the visual system itself [1].
One modeling framework that implements unsupervised generative learning in a neural architecture
is the Deep Boltzmann Machine (DBM) [10]. DBMs have been developed in a machine learning
context, but we argue that they could model aspects of cortical learning and perception as well. They
are related to Hopfield networks, which have been used in the context of models of hallucinations
before [11]. However, whereas the latter model some abstract memory system, DBMs (and the
related Deep Belief Nets) learn hierarchical representations of data [7], thus capturing aspects of the
visual cortex [12], the locus where visual hallucinations are ultimately realized [1, 13]. We aim to
relate inference in a DBM to mechanisms of cortical perception.
We thus present a DBM model of the CBS, and propose a concrete mechanism that could lead to
hallucinations being formed: homeostasis. There is strong experimental evidence that homeostatic
processes serve to stabilize the activity level of neuronal populations through a variety of cellular
and synaptic mechanisms [14]. Moreover, deafferentiated cortex becomes hyper-excitable, and it
has been suggested before that this could be a result of homeostasis [15]. Hence, in CBS a lack
of visual input could lead to intrinsic excitability changes of neurons setting in to restore original
activity levels. In our model, we demonstrate how these changes could cause spontaneous "complex"
hallucinations to be formed even when sensory input is lacking. These hallucinations are complex
in the sense that they involve learned, distributed representations of objects in (toy) images rather
than, for example, corresponding to local structural features of topographically organized cortical
areas, the latter being implicated in simpler hallucinations such as geometric patterns [16].
In Section 3, we first show that homeostasis can be beneficial in a DBM, enabling the model to
recover correct internal representations from degraded input. Then we move on to hallucinations.
The CBS is a complex phenomenon and differs considerably among patients, but we can qualitatively reproduce several aspects found in most or some cases: An initial latent period after loss of
vision that is free of hallucinations; a localization of hallucinations to lesioned parts of the visual
field (Section 3.1), potentially also explaining a tendency to see hallucinated objects too small for
their surroundings; and, effects of cortical lesions and cortical suppression of activity (Section 3.2).
Moreover, hallucinations in CBS tend to occur more often in states of drowsiness, implicating a
role of cholinergic and serotonergic systems [1]. By introducing a modification to the DBM model,
we can account for this by taking acetylcholine to modulate the bottom-up top-down balance of
information flow (Section 4). Finally, we speculate on the potential of the DBM to model other
pathologies and on the difference between hallucinations and mental imagery (Section 5).
2 Deep Boltzmann Machines
Boltzmann machines (BMs) [17] are closely related to Hopfield networks, which have been employed as models of hallucinations before (e.g. [18]). Both models consist of neural units x (here
binary with value 1 or 0, on or off) connected with symmetric weights W. A unit's state is determined by a sigmoid activation function, and biases b control the excitability of each unit. The
overall state of the network evolves according to an "energy" function, E(x) = −xᵀWx − bᵀx,
where minima in the energy landscape correspond to attractor states. A BM differs from a Hopfield
net in two important points. First, a BM is stochastic in that the activation of a unit determines the
probability for it to switch on:
P(x_i = 1 | x) = σ( Σ_j w_ij x_j + b_i ) = 1 / ( 1 + exp(−Σ_j w_ij x_j − b_i) ) .   (1)
When the model is run by switching units on and off stochastically, it performs a random walk
in the energy landscape. Asymptotically, any state x will be assumed with probability P(x) ∝
exp(−E(x)). Hence, a BM can be understood as modeling the probability distribution of the data
rather than just as a memory network, which is why these models are of interest in machine learning.
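A toy sketch of this stochastic dynamics, for illustration only (W symmetric with zero diagonal, states in {0, 1}):

import numpy as np

def gibbs_sweep(x, W, b, rng):
    """Update every unit once according to Eq. (1); repeated sweeps draw from P(x)."""
    for i in rng.permutation(len(x)):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ x + b[i])))
        x[i] = 1.0 if rng.random() < p_on else 0.0
    return x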
The second difference is the possible introduction of hidden units, separating x into visible units
v and hidden units h. Whereas the former represent visible variables such as the pixels of an
image, the latter represent latent variables that help to explain the image. Learning in a BM is
then performed with the aim of forming hidden representations from which data can be generated/predicted/reconstructed. In modeling P (x) for any x, not just data seen in training, one goal is
to learn latent representations that make it possible to generalize over novel data. Another goal is to
learn representations which can then be utilized further, for example for classification or clustering
Figure 1: Example images from the data sets (blank set not shown). (a): Training set. (b): Corrupted set. (c): Noise set. (d): Top half blank set.
[19]. The modeling context of a BM is thus rather different from that of a Hopfield network.
A Deep Boltzmann Machine (DBM) [10] is a BM with a special architecture (Figure 2a) consisting
of a visible layer and several subsequent hidden layers stacked on top of each other. To simplify
computations there are no lateral connections within any layer. When trained on a data set of images, each pair of adjacent layers is trained one at a time so that each layer learns to generate the
activations of the layer below, using only biologically plausible local Hebbian (and anti-Hebbian)
weight changes. See [10] for details.¹ Furthermore, to make a more concrete connection to the
visual cortex, we impose a hierarchical receptive field structure on the model: Each layer's units are
arranged topographically, and each unit's weights are restricted so that it receives inputs only from
a square patch of units below. In detail, the model had 20x20 visible units corresponding to images
with 20x20 pixels, and three hidden layers of 26x26 units each. Each unit in the first hidden layer
received inputs from a 7x7 patch of visible units, whereas the higher layers received inputs from
half (13x13) and all (26x26) of the units of the respective lower layer. The training data set used
consisted of toy images (Figure 1a), containing individual shapes out of three categories (upwards
triangles, downward triangles, squares) at random positions.
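A generator for such toy images might look like the sketch below; the exact shape dimensions are assumptions (the text only states that squares are narrower than the triangles):

import numpy as np

def make_shape_image(rng, n=20):
    """One n-by-n binary image containing an up-triangle, down-triangle, or square."""
    img = np.zeros((n, n))
    kind = int(rng.integers(3))          # 0: up-triangle, 1: down-triangle, 2: square
    if kind == 2:
        s = 6                            # assumed square side, narrower than triangles
        r, c = rng.integers(n - s + 1), rng.integers(n - s + 1)
        img[r:r + s, c:c + s] = 1.0
    else:
        h, w = 7, 13                     # assumed triangle bounding box
        r, c = rng.integers(n - h + 1), rng.integers(n - w + 1)
        for i in range(h):
            row = i if kind == 0 else h - 1 - i          # widen towards the base
            half = (row * (w - 1)) // (2 * (h - 1))
            mid = c + (w - 1) // 2
            img[r + i, mid - half:mid + half + 1] = 1.0
    return img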
2.1 Sampling and decoding the internal state
To model perception, we clamp the visible units to an image and sample the hidden units, starting
from the first hidden layer and proceeding to the topmost, then going downwards, and repeating this
cycle. For each layer, all units can be sampled in parallel. Processing across the cortical hierarchy
is suggested to be cyclic as well [20].
We are interested in the representations formed internally, in the hidden layers of the DBM, when
visual input is fixed or lacking in the case of CBS. To decode the states of the hidden layers, we define
a top-down projection to obtain a reconstructed image. Given the states x_k of the hidden layer k in
question, the activations a_{k−1} of the layer below are computed taking only x_k into account, ignoring
the states x_{k−2} further below.² This process is repeated down to the visibles, so that we obtain a
reconstructed image which has been determined only from the states in layer k (using activations
instead of samples to obtain less noisy images). Note that we perform the top-down projection only
to inspect the internal states. For the actual inference procedure, all intermediate layers are always
sampled properly taking both adjacent layers into account.³ In this work, hidden states are always
initialized to zero for each image and evaluated after 40 sampling cycles.
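The sampling schedule and the decoding projection might be sketched as follows. This is one reading of the procedure, not released code; weights[k] is assumed to connect layer k to layer k+1 (with layer 0 the visibles), and biases[k] belongs to layer k:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sample_cycle(v, hiddens, weights, biases, rng):
    """One up-then-down sweep with the visible units clamped to the image v."""
    states = [v] + hiddens
    top = len(states) - 1
    for k in range(1, top + 1):                    # upward pass
        a = states[k - 1] @ weights[k - 1] + biases[k]
        if k < top:                                # intermediate: both neighbours
            a += states[k + 1] @ weights[k].T
        states[k] = (rng.random(a.shape) < sigmoid(a)).astype(float)
    for k in range(top - 1, 0, -1):                # downward pass, visibles clamped
        a = states[k - 1] @ weights[k - 1] + states[k + 1] @ weights[k].T + biases[k]
        states[k] = (rng.random(a.shape) < sigmoid(a)).astype(float)
    return states[1:]

def project_down(h_k, k, weights, biases):
    """Decode layer k alone: doubled top-down weights, activations rather than
    samples, and original (pre-homeostasis) biases, cf. footnotes 2 and 3."""
    a = h_k
    for j in range(k, 0, -1):
        a = sigmoid(2.0 * (a @ weights[j - 1].T) + biases[j - 1])
    return a  # reconstructed image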
To evaluate the quality of an internal representation when a shape image is presented to the model,
we compute the maximum value of the normalized cross-correlation of the projected reconstruction
with that image. In the case of hallucinations, internal representations are matched against all three
template shapes, taking the one that matches best as being represented and the corresponding cross-correlation value as a measure of the quality of the hallucination (Figure 2b).
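One possible implementation of this score (an interpretation for illustration; normalization conventions for cross-correlation vary):

import numpy as np
from scipy.signal import correlate2d

def ncc_quality(recon, template):
    """Peak normalized cross-correlation between a reconstruction and a template."""
    r = (recon - recon.mean()) / (recon.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return (correlate2d(r, t, mode="full") / t.size).max()

def hallucination_quality(recon, templates):
    """Best-matching template and its score, as used for Figure 2b."""
    scores = [ncc_quality(recon, t) for t in templates]
    best = int(np.argmax(scores))
    return best, scores[best]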
2.2 Homeostasis in a DBM
Homeostatic mechanisms in the cortex are found to stabilize neuronal activity [14]. In our model, we
assume that neurons have an individual preferred activation level that is attained as representations
of inputs are learned. Hence, after the model has been trained we compute each unit's activation
averaged over 40 sample cycles over all training images, taking this as the "healthy" activation level
¹ We used 1-step contrastive divergence for the greedy layer-wise training, which is an approximation to maximum likelihood gradient ascent learning, and performed no further training of the full DBM. The training set consisted of 60,000 images split into mini-batches of 100 and was iterated over through 30 epochs.
² To compensate for the lack of bottom-up input in this case, the weights are doubled.
³ For computing the projections we use the original biases, not affected by homeostasis.
[Figure 2 appears here: (a) model setup, perception and emerging hallucinations; (b) hallucination qualities.]
Figure 2: (a) left-hand side: Setup of the model and perception. With the visible units (dark) clamped
to an image (bottom left), the hidden layer states assume representations of that image. Displayed
are the decoded projections for each layer after ten recurrent cycles (left column). (a) right-hand
side: After homeostatic regulation given empty images, hallucinations form spontaneously. Hallucinations are often stable after a few tens of recurrent cycles, but still fluctuate due to the stochastic
nature of the DBM. They are slightly less well formed in lower layers, which require higher layer
input to form stable shape perceptions. (b): Examples of hallucinations of different qualities (computed from cross-correlations with templates). 1.0 is perfect match with template.
under normal sensory input. To simulate CBS, we then blank the visual input and let the model
employ homeostatic regulation to recover healthy activation levels. The homeostatic mechanism is
implemented in a straightforward way, adjusting only the biases (as in [12]) to model changes to
intrinsic excitability of a neuron. Specifically, we present a mini-batch of 100 (corrupted or blank)
images for 40 cycles each to compute each unit's new average activity a_i, and with the preferred
activity p_i modify the unit's bias b_i according to⁴

Δb_i = η (p_i − a_i),   (2)

where η is a rate-of-change parameter (set to 0.1, but the precise value was not found to matter). This
was repeated for 1000 iterations, and Δb averaged over the population tended to zero before that
as original activity levels were restored. We note that similar mechanisms have been employed in
DBM-like models during training itself to enforce sparsity in the activations [12, 21], resulting in
V1 and V2 like receptive fields being learned [12]. However, such sparsity is not the focus of this paper.
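The regulation loop around Eq. (2) then takes only a few lines. In the sketch below, measure_activity is a hypothetical callback that would run the sampling routine on a mini-batch of (blank or corrupted) images, 40 cycles each, and return the per-unit mean activities:

def homeostatic_regulation(biases, preferred, measure_activity, rate=0.1, n_iter=1000):
    """Drive each hidden unit's average activity towards its preferred level, Eq. (2)."""
    for _ in range(n_iter):
        activities = measure_activity(biases)      # list of per-layer activity arrays
        for k in range(len(biases)):
            biases[k] += rate * (preferred[k] - activities[k])
    return biases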
3 Hallucinations emerging due to homeostasis
First, we demonstrate that homeostasis can be a beneficial mechanism in a DBM. To this end, we
present the trained model with heavily corrupted training images (Figure 1b) in which pixels were
turned off with probability 0.65, emulating pre-cortical damage to the visual input. We then computed the reconstruction quality (Section 2.1) from the reconstructions projected from the top hidden
layer states (after 40 cycles of sampling), and found it to be 0.46 on average. For comparison, average reconstruction quality on the uncorrupted training images was 0.98. The degradation of input
was also reflected in changes in mean activities of the layers (Figure 3a). We then applied homeostasis as described in Section 2.2. As the excitability of the units was adjusted, mean activity levels for
each hidden layer were gradually restored. At the same time, the reconstruction quality rose to about
0.9 (Figure 3a). Thus, a simple local activity stabilization can help alleviating damage to the system.
Note that due to excitation and inhibition being mixed in the weights of a BM, a lack or degradation
of input changes but not necessarily decreases activity. Homeostasis in a DBM thus restores activity
levels in some cases by increasing and in some cases by attenuating neuronal excitability.
To model the CBS which is often triggered by profound retinal damage, we then repeated the homeostasis experiment with blank images. Now, any formation of internal representations could be
regarded as hallucinations. The question was whether internal representations were stable and corresponded to "objects" the model had learned, rather than random patterns. After all, the local changes
of excitability and the permanent loss of visual input could have interfered with the dynamics of internal representations. As shown in Figure 3b, the activity levels of the hidden layers were restored
⁴ Equation 2 is minimizing the cross-entropy between p and a [21].
[Figure 3 appears here: (a) corrupted input and (b) blank input; mean activity per hidden layer and reconstruction/hallucination quality plotted against the homeostasis iteration.]
Figure 3: (a): For corrupted images, homeostatic restoration of original activity levels (left figure,
dashed lines) in the three hidden layers, and recovering reconstruction quality (right figure). (b): For
blank images, restoration of activity (left) and qualities of emerging hallucinations (right). Each dot
marks one hallucination, plotted for 25 trials out of 100 per iteration. Blue curve is average quality.
by homeostasis. To analyze the internal representations of the model, we computed the qualities of
individual representations of the topmost hidden layer from the projected reconstructions (Figure
3b). Over an initial period, internal representations correspond to blank images even though activity
levels gradually improve. At some point, however, hallucinations emerge, and relatively soon they
can reach high quality levels. At this point activities start to change more rapidly, hence hallucinations themselves contribute to the restoration of activity levels. These results are consistent with
CBS: If loss of vision is abrupt, hallucinations emerge after an initial latent period lasting hours to
days [5], which matches well the time scale on which homeostatic mechanisms take place [14].
Besides a peak of hallucination quality at 1.0, there are also numerous hallucinations of lower quality, which could be in line with CBS, where complex hallucinations are often mixed with simple,
less sophisticated ones. Also, some of the lower quality hallucinations are of transitory nature (if
run for 200 instead of 40 cycles, mean quality rose from 0.83 to 0.88). Still, in Section 4 we will
present mechanisms that lead to more stable hallucinations.
We also repeated the experiment with images containing random noise instead of being blank (Figure
1c) to simulate a different type of visual impairment than total blindness. We found in this case
that smaller overall bias shifts were necessary to restore original activity levels (not shown) and
produce hallucinations (Figure 5a). This shows that the exact nature of visual impairment could
have an impact on whether and when hallucinations are formed. Indeed, many CBS patients develop
hallucinations as vision degrades, but stop hallucinating when vision is finally lost completely [5].
In our model, this could be explained when one assumes there is a limit to how much neuronal
excitability can be adapted. Thus, as long as there is some input, even if it is unspecific noise,
hallucinations can be formed, but losing the input completely might require too much of a bias
shift. Another reason for the cessation of hallucinations over time could be input specific synaptic
plasticity, i.e. learning. If we were to train the model on empty images, it would learn to generate
those. Hence, homeostasis as a short-term stabilization mechanism could lead to hallucinations, but
a long-term reorganization of the cortex to represent the novel input would cause them to cease.
3.1 Localized hallucinations with localized lesions
Another property of the hallucinations was that the represented shapes were found to be distributed
over the whole image and could be any of the three categories (Figure 4a). This is of relevance as
complex hallucinations in CBS vary from episode to episode in a majority of patients [4]. Hence, it
is important that the model can form a variety of internal representations of learned images instead
of just converging to a few degenerate states.
Damage to the visual input leading to CBS can be restricted to parts of the visual field. Some
studies [5] report that hallucinations tend to be localized to the blind regions. To test whether we
could reproduce this, we repeated the homeostasis experiment with images from the training data set
in which only the top half had been blanked out (Figure 1d), simulating a localized impairment of
vision. As before, we found mean activities initially to be changed due to the degraded input and then
be restored after homeostatic regulation. To test whether hallucinations would form at any location,
we then tested the model on blank images. As shown in Figure 4b, the stable hallucinations were
[Figure 4 appears here: histograms of hallucination counts (in thousands) over image rows/columns for the three shape types, panels (a) full lesion, (b) top half lesion, (c) right half lesion.]
Figure 4: Localization (center of mass) of high quality (>0.95) hallucinations in projected images at
the end of homeostatic regulation. Counts in thousands out of 10⁵ trials. (a): Although local hotspots
exist and squares are least likely to occur, overall hallucinations vary in type and location. (b): When
only the top half of visual input was blanked during homeostatic regulation, hallucinations emerged
localized to the ?blind? half. (c): When a region too narrow for triangles was blanked instead,
hallucinations were almost always squares.
located in the top half of the visual field only, corresponding to the region of the "lesion". Excitability
changes in the network are thus specific enough to have topographic properties.
This result lets us speculate on another phenomenon of CBS. In some cases, hallucinated objects
are seen as too small for their surroundings ("Lilliputian") [5]. If hallucinations are constrained by a
blind area of restricted size, and there is a tendency to see whole objects (rather than parts), then this
would mean that hallucinated objects would have to be small simply to fit into the blind area, often
too small to fit the real surroundings (e.g. a tiny hallucinated person in a real room). We can test this
in our model: In the training data set, the square shape is less wide than the triangle shapes (Figure
1a). We repeated the last homeostasis experiment with the right half lesioned (9 pixels wide) instead
of the top half, meaning that now only a (hallucinated) square would fit into the blind area. Indeed,
we found that now, stable hallucinations are mostly squares by a large margin (Figure 4c), despite
the fact that for fully blank images and for top lesioned images, squares were by far less common.⁵
The network thus relied on hallucinations that fitted the blind region to restore its activity levels.
3.2 Cortical damage and suppression
Damage to the visual system causing complex hallucinations can also be cortical, e.g. resulting from
stroke. According to [1], such stroke damage needs to be located in earlier visual areas, whereas
it is higher, associative areas that are argued to be both necessary and sufficient for complex hallucinations. The interpretation is thus that the lack of bottom-up input somehow ?releases? activity
in higher areas. However, in [22] transcranial magnetic stimulation was applied to early cortical
areas of a CBS patient in a way thought to suppress cortical excitability. This led to a cessation of
hallucinations. The authors point out that this finding contradicts the release theory. Hence, if the
initial loss of vision is caused by damage to early cortical areas, complex hallucinations can form
over time. If on the other hand activation in early areas is suppressed when CBS symptoms have
already been developed due to for example eye disease, hallucinations cease, at least temporarily.
We reproduced these findings. Taking the first hidden layer as representing an early cortical area,
we repeat the homeostasis experiment with the hidden units in that layer clamped to activations
as if they were receiving no input (instead of clamping the visibles to blank images), simulating a
cortical lesion. Again we found stable hallucinations to emerge in the higher layers (not shown).
Then, to simulate the temporary suppression experiment in [22], we take the original model that
had homeostatic regulation applied with all hidden layers intact and developed hallucinations in the
process, and clamp the first hidden layer to see whether that would interfere with already established
hallucinations. We found that indeed, hallucinations ceased.
We note that due to the hierarchical receptive field structure in the model, the topmost hidden layer
plays a special role, its units having the largest receptive fields. We find that a DBM trained without
⁵ Square hallucinations were found to be least common after homeostasis over several model instances,
although this bias did not exist in generations from the original models. This shows that the homeostasis
induced model is not equivalent to the original one. While intriguing, we did not examine this effect further.
[Figure 5 appears here: mean hallucination quality against total homeostatic adaptation, (a) blank vs. noise images and (b) different ACh levels.]
Figure 5: Comparison of hallucination emergence for blank vs. noise images (a) or various values of the ACh balance factor α (b). Mean qualities are plotted against the total homeostatic adaptation, defined as the absolute change of biases averaged over all units. Both noise and low α lower the amount of adaptation necessary to elicit hallucinations. Low α also increases average hallucination quality.
the topmost layer failed to learn a generative model of the toy shapes. Thus, the topmost layer or pair
of layers is necessary for generating complex hallucinations, as is the case with the higher associative
visual areas (although processing in the cortical hierarchy is of course much more complicated than
in our model). They are also sufficient in our model in so far as homeostasis did induce hallucinations
as long as the first hidden layer was clamped at the outset. However, as the last experiment has
shown, when hallucinations emerge due to activity changes in the whole system, then interfering
even with a lower layer can disrupt their formation.
We also argue that this is not merely a matter of the higher layers lacking unspecific input from
lower layers. When hallucinations are formed they evoke corresponding representations in all hidden
layers, even the lower ones that by themselves cannot support stable shape representations (Figure
2b). Thus, this is a result of recurrent interaction with the higher layers, likely contributing to
the stability of the overall internal state. Clamping the first hidden layer prevents such recurrent
stabilization. A role of recurrent interactions for hallucinations is also suggested in [23].
4 A novel model of acetylcholine and its role in CBS
CBS hallucinations are more likely to occur in states of drowsiness [1, 5]. This suggests a role
of cholinergic and serotonergic systems, which in turn are implicated in pathologies of complex
hallucinations other than CBS as well [1]. There is experimental evidence that acetylcholine (ACh)
acts specifically to emphasize sensory input over internally generated input, mediating "the switching
of the cortical processing mode from an intracortical to an input-processing mode" [24]. Similarly,
ACh has been modeled to modulate the interaction in between bottom-up and top-down processing
[25], the former delivering sensory information, the latter prior expectations.
We present a new model of ACh in the DBM framework. We take the notion that ACh influences
the balance of bottom-up and top-down one step further, suggesting that in the hierarchical cortex
consisting of several processing stages, ACh could mediate this balance at any stage. In the DBM
model, each (intermediate) hidden layer receives input from a layer below, conveying sensory information, and from a layer above that has learned to generate or predict the former layer's activity.
We thus take ACh to set the balance between feed-forward and feed-back flow of information. To
this end, we introduce a balance factor α ∈ [0, 1] so that an intermediate layer x^(k) is sampled as

P(x_i^(k) = 1 | x^(k−1), x^(k+1)) = σ( Σ_j 2α w_ji^(k−1) x_j^(k−1) + Σ_j 2(1 − α) w_ij^(k+1) x_j^(k+1) ),   (3)

given states x^(·) and weights W^(·) above and below (biases omitted for brevity). Hence, α = 1
equals maximal feed-forward flow of information, and α = 0.5 recovers the normal sampling mode.
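Eq. (3) amounts to reweighting the two input streams of an intermediate layer before the sigmoid; a minimal sketch, using the same conventions as the sampling sketch above:

import numpy as np

def sample_intermediate(h_below, h_above, W_below, W_above, bias, alpha, rng):
    """Sample one intermediate layer with ACh-modulated balance, Eq. (3)."""
    bottom_up = 2.0 * alpha * (h_below @ W_below)            # feed-forward drive
    top_down = 2.0 * (1.0 - alpha) * (h_above @ W_above.T)   # feed-back drive
    p_on = 1.0 / (1.0 + np.exp(-(bottom_up + top_down + bias)))
    return (rng.random(p_on.shape) < p_on).astype(float)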
We model the effect of drowsiness on hallucinations in CBS as follows: We assume that drowsiness
is reflected as a decrease in ACh, modeled as α < 0.5 in both intermediate hidden layers.
As states of drowsiness are intermittent with periods of normal or increased vigilance (there is no
pathology of these aspects in CBS per se), we assume that on average, ACh levels are still balanced.
Hence, we repeat the original homeostasis experiment with bias shifts determined with α = 0.5, but
at regular intervals test the model with α = 0.3, reflecting temporary phases of drowsiness.
Results are displayed in Figure 5b. We find that with decreased levels of ACh, not only is a much
smaller homeostatic shift of excitability necessary to elicit hallucinations, but the average hallucination quality is also superior. For example, at a mean bias shift of 0.5, mean hallucination quality with
α = 0.3 is already much higher than with α = 0.5 at maximal bias shift, whereas hallucinations
at balanced ACh levels have not even emerged yet at this point. This would thus correspond to a
situation where hallucinations would only occur during drowsiness. For comparison, we also did
the tests with an increased ACh level of α = 0.7. In that case, hallucinations never emerge over the
course of the homeostatic process (the end of which is determined from activities computed with
α = 0.5). In summary, we found that a temporary change in the balance of feed-forward and feedback flow of information can have a profound effect on the emergence of hallucinations, yielding a
potential explanation for the role of drowsiness and ACh in CBS.
5 Discussion
We have reproduced a variety of findings related to CBS, and make two main predictions: First,
interfering with cortical homeostatic mechanisms after the loss of vision should delay or prevent
the development of hallucinations. Second, we suggest that acetylcholine could not only influence
the balance of thalamic and intracortical inputs [24], but also the balance in between bottom-up and
top-down at various stages of the cortical hierarchy. In CBS in particular, lack of acetylcholine at
cortical sites should correlate with the emergence of hallucinations.
Neurological pathologies other than CBS have been studied before in neural networks [11]. In [18]
schizophrenia is modeled with an approach akin to ours, with hallucinatory memories surfacing in
a Hopfield net due to homeostatic mechanisms that compensate for input degradation. However,
there the "memories", supposedly residing in prefrontal cortex, are accounted for much more abstractly, consisting of hard-coded random patterns. In our model, these unspecified memories can
be understood as learned latent representations in a hierarchical generative model of visual input.
The explicit image-based representations made it possible to investigate localized degradation of
visual input, and the hierarchical nature of the DBM allowed us to examine lesions and suppression
within the cortex, and to model acetylcholine as mediating the feed-forward/feed-back balance of
information flow. Moreover, the present work needs to be seen not just in the context of models
of specifically mental dysfunction, but also in the context of models attempting to capture general
principles of learning and perception in the visual cortex. Here, generative models of unsupervised
learning are promising as they can naturally account for the formation of internal imagery in health
and disease. We emphasize that the key aspect of a model of visual hallucinations is not that it
generates images, but that it spontaneously generates rich internal representations of images.
We have only used toy data. As current machine learning work sees DBMs applied to more and more
complex problems, more powerful demonstrations of complex hallucinations should be possible in
the future. Also, other hallucinatory pathologies could be explored, such as schizophrenia. One neurological abnormality implicated in the latter is a potential disconnection of different cortical regions
[26]. In the DBM, this could be modeled by decoupling different parts of the architecture, and incorporating other sensory hierarchies to account for the fact that visual hallucinations in schizophrenia
tend to come with auditory hallucinations, suggesting system wide interactions.
Another interesting issue is the nature of (non-hallucinatory) mental imagery. Why is the perceptual
quality of mental imagery so much less salient than that of vivid hallucinations? We suggest that for mental imagery, representations are merely realized in higher areas that code for objects more abstractly,
whereas for vivid hallucinations they are realized throughout the whole system [13], and hence are
richer in information content. In the cortex, mechanisms such as in-built translation invariance (complex cell pooling) likely lead to some information not being represented in higher areas, something
not explicitly accounted for in our model. In that context it is thus very interesting to see recent
attempts [7, 27] at implementing biologically related mechanisms (such as lateral interactions) in
DBM-like models that could invert this information loss when generating images: The idea is that
higher layers only seed images in an approximate fashion, and lower areas sort out the details, by
aligning edges and so forth. Then, lower areas really would be needed to realize all information
entailed in rich perception, thus explaining the perceptual difference in between high level mental
imagery and system wide vivid visual hallucinations.
Acknowledgments
We would like to thank Nicolas Heess for helpful comments, Geoff Hinton for input on the mechanism underlying the ACh model (cf. [28]), and the EPSRC, MRC and BBSRC for funding.
References
[1] Manford, M. and Andermann, F. (1998) Complex visual hallucinations. Clinical and neurobiological insights. Brain, 121, 1819–1840.
[2] Collerton, D., Perry, E., and McKeith, I. (2005) Why people see things that are not there: A novel perception and attention deficit model for recurrent complex visual hallucinations. Behavioral and Brain Sciences, 28, 737–757.
[3] Schultz, G. and Melzack, R. (1991) The Charles Bonnet Syndrome: "phantom visual images". Perception, 20, 809–825, PMID: 1816537.
[4] Teunisse, R. J., Zitman, F. G., Cruysberg, J. R. M., Hoefnagels, W. H. L., and Verbeek, A. L. M. (1996) Visual hallucinations in psychologically normal people: Charles Bonnet's Syndrome. The Lancet, 347, 794–797.
[5] Menon, G. J., Rahman, I., Menon, S. J., and Dutton, G. N. (2003) Complex visual hallucinations in the visually impaired: the Charles Bonnet Syndrome. Survey of Ophthalmology, 48, 58–72, PMID: 12559327.
[6] Rao, R. P. and Ballard, D. H. (1999) Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2, 79–87, PMID: 10195184.
[7] Hinton, G. E. (2010) Learning to represent visual input. Philosophical Transactions of the Royal Society B: Biological Sciences, 365, 177–184.
[8] Friston, K. J. (2005) Hallucinations and perceptual inference. Behavioral and Brain Sciences, 28, 764–766.
[9] Corlett, P., Frith, C., and Fletcher, P. (2009) From drugs to deprivation: a Bayesian framework for understanding models of psychosis. Psychopharmacology, 206, 515–530.
[10] Salakhutdinov, R. and Hinton, G. (2009) Deep Boltzmann machines. Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 5, pp. 448–455.
[11] Finkel, L. H. (2000) Neuroengineering models of brain disease. Annual Review of Biomedical Engineering, 2, 577–606.
[12] Lee, H., Ekanadham, C., and Ng, A. Y. (2008) Sparse deep belief net model for visual area V2. Advances in Neural Information Processing Systems 20.
[13] Pollen, D. A. (1999) On the neural correlates of visual perception. Cerebral Cortex, 9, 4–19.
[14] Turrigiano, G. G. and Nelson, S. B. (2000) Hebb and homeostasis in neuronal plasticity. Current Opinion in Neurobiology, 10, 358–364.
[15] Houweling, A. R., Bazhenov, M., Timofeev, I., Steriade, M., and Sejnowski, T. J. (2005) Homeostatic synaptic plasticity can explain post-traumatic epileptogenesis in chronically isolated neocortex. Cereb. Cortex, 15, 834–845.
[16] ffytche, D. H. and Howard, R. J. (1999) The perceptual consequences of visual loss: "positive" pathologies of vision. Brain, 122, 1247–1260.
[17] Hinton, G. E. (2007) Boltzmann machine. Scholarpedia, 2, 1668.
[18] Ruppin, E., Reggia, J. A., and Horn, D. (1996) Pathogenesis of schizophrenic delusions and hallucinations: a neural model. Schizophrenia Bulletin, 22, 105–123, PMID: 8685653.
[19] Hinton, G. E. and Salakhutdinov, R. R. (2006) Reducing the dimensionality of data with neural networks. Science, 313, 504–507.
[20] Tsotsos, J. K., Rodriguez-Sanchez, A. J., Rothenstein, A. L., and Simine, E. (2008) The different stages of visual recognition need different attentional binding strategies. Brain Research, 1225, 119–132.
[21] Nair, V. and Hinton, G. E. (2009) 3D object recognition with deep belief nets. Advances in Neural Information Processing Systems 22.
[22] Merabet, L. B., Kobayashi, M., Barton, J., and Pascual-Leone, A. (2003) Suppression of complex visual hallucinatory experiences by occipital transcranial magnetic stimulation: A case report. Neurocase: The Neural Basis of Cognition, 9, 436.
[23] Grossberg, S. (2000) How hallucinations may arise from brain mechanisms of learning, attention, and volition. Journal of the International Neuropsychological Society, 6, 583–592.
[24] Sarter, M., Hasselmo, M. E., Bruno, J. P., and Givens, B. (2005) Unraveling the attentional functions of cortical cholinergic inputs: interactions between signal-driven and cognitive modulation of signal detection. Brain Research Reviews, 48, 98–111, PMID: 15708630.
[25] Yu, A. J. and Dayan, P. (2002) Acetylcholine in cortical inference. Neural Networks, 15, 719–730, PMID: 12371522.
[26] Ellison-Wright, I. and Bullmore, E. (2009) Meta-analysis of diffusion tensor imaging studies in schizophrenia. Schizophrenia Research, 108, 3–10, PMID: 19128945.
[27] Osindero, S. and Hinton, G. (2008) Modeling image patches with a directed hierarchy of Markov random fields. Advances in Neural Information Processing Systems 20.
[28] Hinton, G. E. (2006) Unsupervised learning for perception. NSERC Discovery Grant Proposal, available from the author.
restored:4 receptive:5 primary:1 damage:8 degrades:1 strategy:1 unraveling:1 gradient:1 thank:1 deficit:1 separating:1 lateral:2 street:1 majority:1 attentional:2 nelson:1 argue:3 cellular:1 reason:1 besides:1 reorganization:1 modeled:4 code:1 mini:2 psychosis:1 balance:11 minimizing:1 demonstration:1 x20:2 mediating:3 setup:2 regulation:6 potentially:1 relate:1 mostly:1 trace:1 suppress:1 peggy:1 boltzmann:8 perform:1 inspect:1 neuron:3 markov:1 sm:1 howard:1 enabling:1 anti:1 displayed:2 situation:1 emulating:1 hinton:8 precise:1 neurobiology:1 varied:1 intermittent:1 homeostatic:18 david:1 pair:2 connection:2 philosophical:1 hallucinated:5 pollen:1 learned:7 narrow:1 temporary:3 eh8:1 hour:1 established:1 suggested:4 below:6 perception:15 mismatch:1 pattern:3 sparsity:2 built:1 including:1 memory:5 explanation:1 belief:3 royal:1 friston:1 restore:3 representing:1 improve:1 eye:3 numerous:1 excitable:1 health:2 epoch:1 geometric:1 prior:1 understanding:1 review:2 contributing:1 lacking:4 loss:8 fully:1 mixed:2 generation:1 interesting:2 localized:6 sufficient:2 consistent:1 principle:1 lancet:1 playing:1 pi:2 interfering:2 translation:1 obscure:1 tiny:1 row:2 changed:1 course:2 repeat:2 last:2 free:1 soon:1 summary:1 implicated:4 accounted:2 bias:12 side:2 disconnection:1 explaining:3 template:3 taking:6 wide:4 emerge:5 absolute:1 sparse:1 bulletin:1 edinburgh:2 distributed:3 curve:1 feedback:1 cortical:24 evaluating:1 world:1 rich:2 sensory:13 forward:5 made:3 qualitatively:1 projected:4 author:2 bm:8 schultz:1 far:2 correlate:2 transaction:1 reconstructed:3 approximate:1 emphasize:2 preferred:2 neurobiological:1 evoke:1 chronically:1 assumed:1 andermann:1 xi:2 disrupt:1 latent:5 why:4 dutton:1 promising:3 ballard:1 nature:7 learn:6 nicolas:1 decoupling:1 ignoring:1 frith:1 pathogenesis:1 complex:22 cl:3 necessarily:1 official:1 did:4 aistats:1 main:1 whole:5 noise:7 serotonergic:2 mediate:1 arise:1 lesion:8 repeated:6 allowed:1 neuronal:7 site:1 elaborate:1 fashion:2 downwards:1 hebb:1 pascual:1 position:1 decoded:1 explicit:1 clamped:3 perceptual:5 learns:2 deprivation:2 down:7 xt:1 specific:2 explored:1 cease:2 evidence:2 exists:1 essential:1 intrinsic:2 consist:1 incorporating:1 downward:1 interfered:1 margin:1 clamping:2 entropy:1 traumatic:1 led:1 simply:1 likely:4 forming:1 visual:42 failed:1 prevents:1 nserc:1 temporarily:1 neurological:3 binding:1 determines:1 nair:1 modulate:2 goal:2 attenuating:1 consequently:1 room:1 content:2 change:13 hard:1 determined:4 specifically:3 reducing:1 degradation:4 total:4 invariance:1 experimental:2 tendency:2 intact:1 internal:17 people:4 mark:1 latter:5 support:1 brevity:1 relevance:1 evaluate:1 tested:1 phenomenon:4 |
3,420 | 4,098 | Nonparametric Density Estimation for Stochastic Optimization with an Observable State Variable
Lauren A. Hannah
Duke University
Durham, NC 27701
[email protected]
Warren B. Powell
Princeton University
Princeton, NJ 08544
[email protected]
David M. Blei
Princeton University
Princeton, NJ 08544
[email protected]
Abstract
In this paper we study convex stochastic optimization problems where a noisy
objective function value is observed after a decision is made. There are many
stochastic optimization problems whose behavior depends on an exogenous state
variable which affects the shape of the objective function. Currently, there is no
general purpose algorithm to solve this class of problems. We use nonparametric
density estimation to take observations from the joint state-outcome distribution
and use them to infer the optimal decision for a given query state s. We propose
two solution methods that depend on the problem characteristics: function-based
and gradient-based optimization. We examine two weighting schemes, kernel
based weights and Dirichlet process based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product
newsvendor problem and the hour ahead wind commitment problem. Our results
show that in some cases Dirichlet process weights offer substantial benefits over
kernel based weights and more generally that nonparametric estimation methods
provide good solutions to otherwise intractable problems.
1 Introduction
In stochastic optimization, a decision maker makes a decision and faces a random cost based on
that decision. The goal is to choose a decision that minimizes the expected cost using information
from previous observations. Stochastic optimization problems with continuous decision spaces have
many viable solution methods, including function averaging and stochastic gradient descent [20].
However, in many situations conditions for the previous observations may not be the same as the
current conditions; the conditions can be viewed as state variables. There are currently no general
purpose solution methods for stochastic optimization problems with state variables, although they
would be useful for finance, energy, dynamic pricing, inventory control and reinforcement learning
applications.
We consider the newsvendor problem, a classic inventory management problem, to illustrate existing
solution methods for stochastic optimization problems with state variables and their limitations.
Here, newspapers can be bought in advance for cost c, and up to D of them can be sold for price
p, where D is a random demand; the goal is to determine how many papers should be ordered so
as to maximize the expected profit. A state variable that contains information about the random
demand may also be included. For example, a rainy forecast may correlate to a lower demand while
a sunny forecast may correlate to a higher. A natural solution method would be to partition the
previous observations into "rainy" and "sunny" bins, and then solve the problem for each partition.
This essentially models the problem as a single time period Markov Decision Process and solves
the problem accordingly [16, 21]. Partitioning methods work when the state space can take a small
number of discrete values.
Two problems arise with partitioning methods when the state space becomes larger. First, the number of states grows exponentially with the dimension of the state space. If there are 10 attributes,
like weather, stock prices, days until an election, etc, and each can take 100 values, then there will
be 1020 individual states. Second, previous observations are sparse over these states; a vast number
of observations must be gathered before there are enough to make a reasonable decision for a given
state. Rather than partitioning, we propose using observations from "similar" states to create a deterministic decision-expected cost function, also called an objective function, that is conditioned on
a particular state.
Similar methods have been proposed in an approximate dynamic programming setting that use basis
functions, such as linear and polynomial predictors, to construct approximate value functions [22,
14]. Basis functions, however, are hard to choose manually and automatic selection is an area of
active research [12]. Moreover, basis functions do not guarantee that the approximate objective
function is convex in the decision.
We propose using nonparametric density estimation for the joint state and outcome distribution to
group observations from ?similar? states with weights. These are then used to construct deterministic, convex approximations of the noisy function given the current observed information. The
results are a deterministic, convex math program. These can be efficiently solved by a number of
commercial solvers, even with very large decision spaces (10 to 1,000+ variables and constraints).
We give two methods to construct an approximate objective function using previous observations.
The first is a function-based method. In some cases, entire random objective functions can be viewed
retrospectively. For example, if the demand is known in the newsvendor problem, then the value of
all decisions is also known. In these particular cases, the approximate objective function is modeled as a weighted average of the observed functions. The second method is based on stochastic
gradients. In some cases, it is not possible to observe entire functions or observed functions may
be too complex to manipulate. When this happens, we propose constructing a separable, piecewise
linear approximate objective function. A piecewise linear, convex function is created in each decision dimension by generating a slope function from a weighted, order-restricted regression of the
gradients, and then integrating that function. The result is an approximate objective function that is
not necessarily the same as the original objective function, but one that has the same minima.
Both methods depend heavily on weights to capture dependence between the state and the outcome.
We propose two weighting schemes: kernels weights and Dirichlet process mixture model weights.
Kernels are simple to implement, but Dirichlet process mixture models have certain appealing properties. First, they act as a local bandwidth selector across the state space; second, the weights are
generated by partitions rather than products of uni-dimensional weights, so the results scale better
to higher-dimensional settings.
We contribute novel algorithms for stochastic optimization problems with a state variable that work
with large, continuous decision spaces and propose a new use of Dirichlet process mixture models.
We give empirical analysis for these methods where we show promising results on test problems.
The paper is organized as follows. In Section 2, we review traditional function-based and gradientbased optimization methods and in each case present novel algorithms to accommodate an observable state variable. We present an empirical analysis of our methods for synthetic newsvendor data
and the hour ahead wind commitment problem in Section 4 and a discussion in Section 5.
2 Stochastic optimization for problems with an observable state variable
Traditional stochastic optimization problems have the form
\min_{x \in \mathcal{X}} \mathbb{E}\left[F(x, Z)\right], \qquad (1)
where $x \in \mathbb{R}^d$ is the decision, $Z : \Omega \to \Xi$ is a random outcome, $\mathcal{X}$ is a decision set and $F(x, Z(\omega))$
is a random objective function [20]. In the newsvendor problem, which we will use as a running
example, $x$ is the stocking level and $Z$ is the random demand. Given $x$ and $Z(\omega)$, $F$ is deterministic.
When a state variable is included, we first observe a random state $S \in \mathcal{S}$ that may influence $F$ and
the distribution of $Z$, then we make a decision $x$, and finally we observe the random variable $Z$. Eq. (1) becomes
\min_{x \in \mathcal{X}} \mathbb{E}\left[F(x, s, Z) \mid S = s\right]. \qquad (2)
Traditional stochastic optimization techniques require us to sample from the conditional distribution
of p(Z|S = s), treating each state observation independently [20]. We will use nonparametric
density estimation for the joint distribution of (S, Z) to take into account that similar values of S
affect Z and F in a similar way. We now describe new methods for function-based and gradientbased optimization for problems with an observable state variable.
2.1 Function-based optimization with an observable state variable
Function-based optimization is used when a single outcome $\omega$ can tell us the value of all decisions
given that outcome [19]. For example, in the newsvendor problem, if the demand is known then
the value of all inventory levels is known. Function-based optimization relies on sampling a set of
scenarios, $\omega_1, \ldots, \omega_n$ from $\Omega$, to approximate Eq. (1):
\min_{x \in \mathcal{X}} \frac{1}{n} \sum_{i=1}^{n} F(x, Z(\omega_i)). \qquad (3)
Since Eq. (3) is deterministic given $\omega_{1:n}$, deterministic solution methods can be used. These methods are well developed and are implemented in a variety of commercial solvers.
When a state variable is introduced, we wish to solve Eq. (2) for a fixed query state $s \in \mathcal{S}$. However,
scenarios are not i.i.d. from the distribution $p(Z \mid S = s)$, but rather from the joint distribution
$p(Z, S)$. Let $(S_i, Z(\omega_{i+1}))_{i=0}^{n-1}$ be a set of $n$ observations. Instead of taking a naive average of the
observations as in Eq. (3), we weight the observations based on the distance between the query state
$s$ and each observation $S_i$ with weight $w_n(s, S_i)$. The weights must sum to 1, $\sum_{i=0}^{n-1} w_n(s, S_i) = 1$,
and the weights may change with the number of observations, $n$. Set
\bar{F}_n(x \mid s) = \sum_{i=0}^{n-1} w_n(s, S_i)\, F(x, S_i, Z(\omega_{i+1})). \qquad (4)
The optimization problem becomes
\min_{x \in \mathcal{X}} \bar{F}_n(x \mid s). \qquad (5)
Note that because $F(x, S_i, Z(\omega_{i+1}))$ is convex in $x$ for every $S_i$ and $\omega_{i+1}$, $\bar{F}_n(x \mid s)$ is convex and
Eq. (5) can be solved with a commercial solver. We discuss weight functions in Section 3.
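As a concrete illustration, the following is a minimal sketch of Eqs. (4)-(5) for a single-product newsvendor, where observing the demand reveals the whole objective function. The cost/price defaults and the grid search standing in for a commercial solver are illustrative assumptions, not choices from the paper.

```python
import numpy as np

def newsvendor_loss(x, demand, c=1.0, p=2.0):
    # Negative profit: pay c per unit stocked, sell up to the realized demand at p.
    return c * x - p * np.minimum(x, demand)

def function_based_decision(s, states, demands, weight_fn, grid):
    # Eq. (4): weighted average of the observed objective functions.
    w = np.array([weight_fn(s, s_i) for s_i in states], dtype=float)
    w /= w.sum()                                   # weights must sum to one
    demands = np.asarray(demands, dtype=float)
    fbar = [np.dot(w, newsvendor_loss(x, demands)) for x in grid]
    return grid[int(np.argmin(fbar))]              # Eq. (5), solved by enumeration
```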
2.2 Gradient-based optimization with an observable state variable
In gradient-based optimization, we no longer observe an entire function $F(x, S, Z(\omega))$, but only a
derivative taken at $x$,
\hat{\beta}(x_i, s, \omega_{i+1}) = \nabla_x F(x_i, s, Z(\omega_{i+1})). \qquad (6)
Stochastic approximation is the most popular way to solve stochastic optimization problems using a
gradient; it modifies gradient search algorithms to account for random gradients [17, 9]. The general
idea is to optimize $x$ by iterating,
x_{n+1} = \Pi_{\mathcal{X}}\left(x_n - a_n \nabla_x F(x_n, Z(\omega_{n+1}))\right), \qquad (7)
where $\Pi_{\mathcal{X}}$ is a projection back into the constraint set $\mathcal{X}$, $\nabla_x F(x_n, Z(\omega_{n+1}))$ is a stochastic gradient
at $x_n$ and $a_n$ is a stepsize. Other approaches to gradient-based optimization have included construction of piecewise linear, convex functions to approximate $F(x)$ in the region where $x$ is near the
optimal decision, $x^*$ [15].
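For contrast with the state-dependent methods developed next, here is a minimal sketch of the classical projected stochastic approximation iteration in Eq. (7) on a box-constrained problem; the step-size schedule $a_n = a/(n+1)$ is a standard illustrative choice, not one prescribed by the paper.

```python
import numpy as np

def projected_sa(grad_sample, x0, lo, hi, n_iters=1000, a=1.0):
    # Eq. (7): x_{n+1} = Proj_X(x_n - a_n * stochastic gradient at x_n).
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        g = grad_sample(x)                        # draws Z and returns grad_x F(x, Z)
        x = np.clip(x - (a / (n + 1.0)) * g, lo, hi)  # projection onto the box
    return x
```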
Including a state variable into gradient-based optimization is less straightforward than it is for
function-based optimization. We run into difficulties because we choose $x_n$ given $S_n$. When we
include state $S_n$, the decision $x_n$ is based on the state $S_n$. But $x_{n-1}$ depends on $S_{n-1}$, so no iterative procedure like Eq. (7) can be used. Moreover, constructing the approximate function $\bar{F}_n(x \mid s)$
is not trivial because the stochastic gradients depend on both $x_n$ and $S_n$.
Therefore, we propose modeling F (x|s) with a piecewise linear, convex, separable approximation.
Even if F (x|s) is not itself separable, we aim to approximate it with a simpler (separable) function
that has the same minimum for all fixed s. Approximating the minimum is easier than approximating
the entire convex function [4, 15]. Moreover, convex regression is easier in one dimension than
multiple dimensions. We approximate $\mathbb{E}[F(x, s, Z)]$ by a series of separable functions,
\bar{F}_n(x \mid s) = \sum_{k=1}^{d} f_n^k(x^k \mid s),
where $x^k$ is the $k$-th component of $x$. We enforce convexity restrictions on $f_n^k(x \mid s)$ for every $s \in \mathcal{S}$.

[Figure 1 appears here: four panels plotting the decision variable against the response/value, with observations shaded by weight.]
Figure 1: A graphical depiction of the gradient-based method in one dimension for a maximization problem.
(Top left) Observe gradients and states. (Top right) Weight observations based on state. (Bottom left) Fit isotonic
regression to weighted slopes. (Bottom right) Integrate isotonic regression to form $f_n^k(x_n \mid S_n)$.
Unlike the function-based method, the gradient-based method is a fundamentally online algorithm:
$x_n$ is used to choose $x_{n+1}$. Given $S_n$, we choose $x_n$ as follows,
x_n = \arg\min_{x \in \mathcal{X}} \sum_{k=1}^{d} f_n^k(x^k \mid S_n).
We then receive $\hat{\beta}(x_n, S_n, \omega_{n+1})$. The observations $(x_i, S_i, \hat{\beta}(x_i, S_i, \omega_{i+1}))_{i=0}^{n-1}$ are used to update $\bar{F}_n(x \mid s)$ sequentially. Fix $k \in \{1, \ldots, d\}$; we want to construct a piecewise linear $f_n^k(x \mid s)$
by constructing an increasing slope function, $v_n^k(x \mid S_n) = \frac{d}{dx} f_n^k(x \mid S_n)$, based on the stochastic gradient observations, $\hat{\beta}_{1:n}$. We use weights to group the gradients from states "similar" to $S_n$ and a
weighted isotonic (order-restricted) regression to construct $v_n^k(x \mid S_n)$. Order the decision observations $x^k_{[0]}, \ldots, x^k_{[n-1]}$, and then solve to find slopes for the decision-ordered space,
v_n^k(x_{0:n-1} \mid S_n) = \arg\min_{v} \sum_{i=0}^{n-1} w_n\left(S_n, S_{[i]}\right) \left(\hat{\beta}(x^k_{[i]}, S_{[i]}, \omega_{[i+1]}) - v_{[i]}\right)^2, \qquad (8)
\text{subject to: } v_{[i-1]} \le v_{[i]}, \quad i = 1, \ldots, n-1.
First $v_n^k(x \mid S_n)$ is generated by interpolating the point estimates from Eq. (8) across the $k$-th dimension of the decision space, and then $f_n^k(x \mid S_n)$ is created by integrating $v_n^k(x \mid S_n)$. The monotonicity
of $v_n^k(x \mid S_n)$ ensures the convexity of $f_n^k(x \mid S_n)$. See Figure 1 for an example. The general method
for constructing $\bar{F}_n(x \mid s)$ is as follows (a sketch of the slope-fitting step follows the list):
1. Observe $S_n$ and construct the weights $(w_n(S_n, S_i))_{i=0}^{n-1}$,
2. Use the weights $(w_n(S_n, S_i))_{i=0}^{n-1}$, previous decisions $x_{0:n-1}$ and gradients to construct slopes $v^k_{1:K}(S_n)$ with Eq. (8),
3. Reconstruct $f^k(x \mid S_n)$ from the slopes and construct $\bar{F}(x \mid S_n)$ from $(f^k(x \mid S_n))_{k=1}^{d}$, and
4. Choose $x_n$ given $\bar{F}(x \mid S_n)$: $x_n = \arg\min_{x \in \mathcal{X}} \bar{F}_n(x \mid S_n)$.
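A minimal sketch of steps 2-3 in one decision dimension, under illustrative assumptions: scikit-learn's IsotonicRegression (which accepts per-observation weights) stands in for the weighted order-restricted regression of Eq. (8), and SciPy's cumulative trapezoid rule performs the integration.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from scipy.integrate import cumulative_trapezoid

def fit_separable_piece(x_obs, grad_obs, weights, grid):
    # Eq. (8): weighted isotonic regression of the observed stochastic gradients
    # on the ordered decision observations gives an increasing slope function.
    iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
    iso.fit(x_obs, grad_obs, sample_weight=weights)
    slopes = iso.predict(grid)                        # v_n^k evaluated on a grid
    # Integrating the increasing slope function yields a convex, piecewise-linear f_n^k.
    values = cumulative_trapezoid(slopes, grid, initial=0.0)
    return values  # f_n^k on `grid`, up to an additive constant
```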
Details are given in the supplementary material. We now discuss the choice of weight functions.
3 Weight functions
Like the choice of step size in stochastic approximation, the choice of weight functions in Eqs. (4)
and (8) determines whether and under which conditions function-based and gradient-based optimization produce acceptable results. Weighting functions rely on density estimation procedures to
approximate the conditional density f (z|s), where s is the state and z is the response. Conditional
density estimation weights observations from a joint distribution to create a conditional distribution.
We use this to obtain weights from two nonparametric density estimators, kernels and Dirichlet
process mixture models.
3.1 Kernel weights
Kernel weights rely on kernel functions, K(s), to be evaluated at each observation to approximate
the conditional density. A common choice for K with continuous covariates is the Gaussian kernel,
$K_h(s) = (2\pi h)^{-1/2} \exp\{-s^2/2h\}$, where the variance $h$ is called the bandwidth. Kernel weights
have the advantage of being simple and easy to implement. The simplest and most universally
applicable weighting scheme is based on the Nadaraya-Watson estimator [10, 23]. If $K(s)$ is the
kernel and $h_n$ is the bandwidth after $n$ observations, define
w_n(s, S_i) = K\left((s - S_i)/h_n\right) \Big/ \sum_{j=0}^{n-1} K\left((s - S_j)/h_n\right).
Kernel estimators require a well sampled space, are poor in higher dimensions and highly sensitive
to bandwidth size [5].
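A minimal sketch of Nadaraya-Watson weights with a spherical Gaussian kernel; a single shared bandwidth h across state dimensions is an illustrative simplification.

```python
import numpy as np

def nadaraya_watson_weights(s, states, h):
    # Gaussian kernel at scaled distances, normalized so the weights sum to one.
    d2 = np.sum((states - s) ** 2, axis=1) / h**2
    k = np.exp(-0.5 * d2)
    return k / k.sum()
```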
3.2 Dirichlet process weights
One of the curses of dimensionality is sparseness of data: as the number of dimensions grows, the
distance between observations grows exponentially. In kernel regression, this means that only a
handful of observations have weights that are effectively non-zero, producing non-stable estimates.
Instead, we would like to average responses for "similar" observations. We propose modeling the
distribution of the state variable with a Dirichlet process mixture model, which is then decomposed
into weights.
Dirichlet process mixture models. A mixture model represents a distribution, $g(s)$, as a weighted
infinite sum of simpler distributions, $g(s \mid \theta_i)$, parameterized by $\theta_i$: $g(s) = \sum_{i=1}^{\infty} p_i\, g(s \mid \theta_i)$. Here,
$p_i$ is the mixing proportion for component $i$. We can use a Dirichlet process (DP) with base measure
$G_0$ and concentration parameter $\alpha$ to place a distribution over the joint distribution of $(p_i, \theta_i)$, the
mixture proportion and location of component $i$ [6, 1]. Assume that data $S_1, \ldots, S_n$ are iid with a
distribution that is modeled by a mixture over distribution $G(\theta)$,
P \sim DP(\alpha, G_0), \quad \theta_i \mid P \sim P, \quad S_i \mid \theta_i \sim G(\theta_i). \qquad (9)
The distribution $P$ drawn from a Dirichlet process is an almost surely discrete measure over parameters, with the mixture proportion associated with $\theta$ as the atomic weight. The hidden measure $P$ in
Eq. (9) can be integrated out to obtain a conditional distribution of $\theta_n \mid \theta_{1:n-1}$ [3],
\theta_n \mid \theta_1, \ldots, \theta_{n-1} \sim \frac{1}{\alpha + n - 1} \sum_{i=1}^{n-1} \delta_{\theta_i} + \frac{\alpha}{\alpha + n - 1} G_0. \qquad (10)
Here, $\delta_{\theta}$ is the Dirac measure with mass at $\theta$. Eq. (10) is known as a Polya urn posterior; the variable
$\theta_n$ has positive probability of assuming the value of one of the previously observed $\theta_i$, but it also
can take a new value drawn from $G_0$ with positive probability. The parameter $\alpha$ controls how likely
$\theta_n$ is to take a new value. We now discuss how weights can be constructed from Eq. (9).
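A minimal sketch of drawing a partition from the Polya urn scheme in Eq. (10); integer cluster labels stand in for the distinct values of the hidden parameters.

```python
import numpy as np

def polya_urn_partition(n, alpha, rng=np.random.default_rng(0)):
    # Assign observation i to an existing cluster C_j with probability
    # |C_j| / (alpha + i), or to a new cluster with probability alpha / (alpha + i).
    labels = [0]
    for i in range(1, n):
        counts = np.bincount(labels)
        probs = np.append(counts, alpha) / (alpha + i)
        labels.append(int(rng.choice(len(probs), p=probs)))
    return np.array(labels)
```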
Dirichlet process mixture model weights. A Dirichlet process mixture model can be used to
model an unknown density, but it can simultaneously be used to produce a distribution of the partition structure of observed data [13, 8]. This is shown in the Polya urn posterior of Eq. (10); each
hidden parameter has positive probability of taking the same value as another parameter. If two
hidden parameters have the same value, they are in the same partition/cluster. The partition structure
induces weights on the observations, proportional to 1 if they are in the same cluster, 0 if not.
Let $p = \{C_1, \ldots, C_{n(p)}\}$ be the partition of the observations $\{1, \ldots, n\}$. Here $C_i = \{j : \theta_j = \theta_i^*\}$
is the partition set generated by $n(p)$ unique parameter values, denoted $\theta_1^*, \ldots, \theta_{n(p)}^*$. Now suppose
that we know the partition $p$. Given $p$, we include the query state $s$ into cluster $C_i$ with probability
p_s(C_i \mid p) = \mathbb{P}(s \in C_i \mid p, S_{1:n}) \propto |C_i| \int g(s \mid \theta^*)\, dH_{C_i}(\theta^*),
where $|C_i|$ is the number of elements in $C_i$, and $H_{C_i}(\theta^*)$ is the posterior distribution of $\theta^*$ conditioned on $G_0$ and the set of observations $\{S_j : S_j \in C_i\}$. Given $p$, the weighting function is the
probability that the hidden parameter for $s$ would be $\theta_i$, the hidden parameter for $S_i$,
w_n(s, S_i) \mid p = \sum_{j=1}^{n(p)} \frac{p_s(C_j \mid p)}{|C_j|} \mathbf{1}_{\{S_i \in C_j\}}. \qquad (11)
Eq. (11) is conditioned on a partition structure, but the Dirichlet process produces a distribution over
partition structures. Let $\pi(p)$ be the prior distribution for partitions $p$ and $\pi(p \mid S_{0:n-1})$ the posterior.
Integrating over the partition posterior, we obtain unconditional weights,
w_n(s, S_i) = \sum_{p} \pi(p \mid S_{1:n}) \sum_{j=1}^{n(p)} \frac{p_s(C_j \mid p)}{|C_j|} \mathbf{1}_{\{S_i \in C_j\}} \approx \frac{1}{M} \sum_{m=1}^{M} \sum_{j=1}^{n(p^{(m)})} \frac{p_s(C_j \mid p^{(m)})}{|C_j|} \mathbf{1}_{\{S_i \in C_j\}}. \qquad (12)
It is infeasible to integrate over all of the partitions; therefore, we approximate Eq. (12) by performing a Monte Carlo integration with $M$ posterior partition samples, $(p^{(m)})_{m=1}^{M}$. We obtain $(p^{(m)})_{m=1}^{M}$ by generating $M$ iid samples of the hidden parameters, $\theta_{0:n-1}$, from the posterior of Eq. (9) with Gibbs sampling [11].
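A minimal sketch of the Monte Carlo approximation in Eq. (12). It assumes the Gibbs sampler has already produced M posterior partitions as label arrays, and that `cluster_pred_density(s, members)` is a user-supplied posterior predictive density whose form depends on the base measure $G_0$; both names are hypothetical.

```python
import numpy as np

def dp_weights(s, states, partitions, cluster_pred_density):
    # partitions: list of M integer label arrays, one per posterior sample.
    n = len(states)
    w = np.zeros(n)
    for labels in partitions:
        labels = np.asarray(labels)
        clusters = np.unique(labels)
        # p_s(C_j | p) is proportional to |C_j| times the predictive density of s.
        scores = np.array([
            (labels == c).sum() * cluster_pred_density(s, states[labels == c])
            for c in clusters
        ])
        scores /= scores.sum()
        for c, sc in zip(clusters, scores):
            members = labels == c
            w[members] += sc / members.sum()      # Eq. (11) contribution
    return w / len(partitions)                    # Eq. (12): average over samples
```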
4 Empirical analysis
4.1 Multi-product constrained newsvendor problem
A multi-product newsvendor problem is a classic operations research inventory management problem. In the two-product problem, a newsvendor is selling products A and B. She must decide how
much of each product to stock in the face of random demand, $D_A$ and $D_B$. A and B can be bought
for $(c_A, c_B)$ and sold for $(p_A, p_B)$, respectively. Any inventory not sold is lost. Let $(x_A, x_B)$ be the
stocking decisions for A and B respectively; it is subject to a budget constraint, $b_A x_A + b_B x_B \le b$,
and a storage constraint, $r_A x_A + r_B x_B \le r$. An observable state $S = (S_1, S_2)$ contains information
about $D_A$ and $D_B$. The problem is,
\max_{x_A, x_B} \; -c_A x_A - c_B x_B + \mathbb{E}\left[p_A \min(x_A, D_A) + p_B \min(x_B, D_B) \mid S = s\right] \qquad (13)
\text{subject to: } b_A x_A + b_B x_B \le b, \quad r_A x_A + r_B x_B \le r.
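A minimal sketch of the function-based approximation of (13) as a weighted sample-average program, written here with cvxpy (an assumption; the paper only states that a commercial solver can be used). The concave min terms make this a valid convex program when the weights are nonnegative.

```python
import numpy as np
import cvxpy as cp

def solve_weighted_newsvendor(w, dA, dB, cA, cB, pA, pB, bA, bB, b, rA, rB, r):
    # w: nonnegative weights w_n(s, S_i) summing to 1; dA, dB: observed demands.
    xA, xB = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
    # Weighted average of observed profit functions: Eq. (4) applied to (13).
    profit = -cA * xA - cB * xB + w @ (
        pA * cp.minimum(xA, dA) + pB * cp.minimum(xB, dB)
    )
    constraints = [bA * xA + bB * xB <= b, rA * xA + rB * xB <= r]
    cp.Problem(cp.Maximize(profit), constraints).solve()
    return xA.value, xB.value
```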
We generated data for Problem (13) in the following way. Demand and two state variables were
generated in a jointly trimodal Gaussian mixture. The following methods were compared.
Function-based with kernel and Gradient-based with kernel. Bandwidth is selected according to
the "rule of thumb" method of the np package for R, $h_j = 1.06\,\hat{\sigma}_j\, n^{-1/(4+d)}$, where $\hat{\sigma}_j$ is defined as
$\min(\text{sd}, \text{interquartile range}/1.349)$ [7].
Function-based with DP and Gradient-based with DP. We used the following hierarchical model,
P \sim DP(\alpha, G_0), \quad \theta_i = (\mu_{i,s}, \sigma^2_{i,s}) \mid P \sim P, \quad S_{i,j} \mid \theta_i \sim N(\mu_{i,s,j}, \sigma^2_{i,s,j}), \; j = 1, 2.
[Figure 2 appears here: "Two Product Newsvendor", plotting Value against Number of Observations for four methods (Kernel and DP weights, each with gradient-based and function-based optimization) together with the Optimal value.]
Figure 2: Gradient-based and function-based methods as a function of number of data points sampled. Results
are averaged over 100 test problems with observed demand.
Posterior samples were drawn using Gibbs sampling with a fully collapsed sampler run for 500
iterations with a 200 iteration burn-in with samples taken every 5 iterations.
Optimal. These are the optimal decisions with known mixing parameters and unknown components.
Results. Decisions were made under each regime over eight sample paths; 100 test state/demand
pairs were fixed and decisions were made for these problems given the observed states/decisions
in the sample path for each method. Results are given in Figure 2. The kernel and Dirichlet process weights performed approximately equally for each method, but the function-based methods
converged more quickly than the gradient-based methods.
4.2 Hour ahead wind commitment
In the hour ahead wind commitment problem, a wind farm manager must decide how much energy
to promise a utility an hour in advance, incorporating knowledge about the current state of the world.
The decision is the amount of wind energy pledged, a scalar variable. If more energy is pledged than
is generated, the difference must be bought on the spot market, which is expensive with a price that is
unknown when the decision is made; otherwise, the excess is lost. The goal is to maximize expected
revenue. The observable state variable is the time of day, time of year, wind history from the past
two hours, contract price and current spot price,
TiD
PiS
Wi?1
Si
xi
=
=
=
=
=
time of day,
current spot price,
wind speed an hour ago,
observable state variable
amount of energy pledged,
TiY
PiC
Wi
Yi+1 (x)
=
=
=
=
=
time of year,
contract price,
current wind speed,
(TiD , TiY , PiC , PiS , Wi , Wi?1 ),
S
PiC x ? Pi+1
max (x ? Wi+1 , 0).
S
The revenue that the wind farm receives, Yi+1 (x), depends on the variables Pi+1
and Wi+1 , which
are not known until the next hour. We used wind speed data from the North American Land Data Assimilation System with hourly observations from 2002?2005 in the following locations: Amarillo,
TX. Latitude: 35.125 N, Longitude: 101.50 W. The data have strong daily and seasonal patterns.
The mean wind level is 186.29 (m/s)3 with standard deviation 244.86. Tehachapi, CA. Latitude:
35.125 N, Longitude: 118.25 W. The data have strong seasonal patterns. The mean wind level is
89.45 (m/s)3 with standard deviation 123.47.
Clean spot and contract price data for the time period were unavailable, so contract prices were
generated by Gaussian random variables with a mean of 1 and variance of 0.10. Spot prices were
generated by a mean-reverting (Ornstein-Uhlenbeck) process with a mean function that varies by
time of day and time of year [18]. The data were analyzed separately for each location; they were
divided by year, with one year used for training and the other three used for testing. The following
methods were compared on this dataset:
Known wind. The wind is known, allowing maximum possible commitment, $x_i = W_{i+1}(\omega_{i+1})$. It
serves as an upper bound for all of the methods.
Method/Location        | 2002          | 2003          | 2004          | 2005
Tehachapi, CA
  Known Wind           | 97.5          | 94.5          | 73.7          | 91.8
  Function with Kernel | 78.8 (80.8%)  | 77.3 (81.8%)  | 58.9 (79.9%)  | 72.1 (78.5%)
  Function with DP     | 85.1 (87.3%)  | 82.6 (87.4%)  | 63.9 (86.7%)  | 79.6 (86.7%)
  Ignore State         | 30.4 (31.1%)  | 31.1 (32.9%)  | 22.8 (30.9%)  | 29.3 (31.9%)
Amarillo, TX
  Known Wind           | 186.0         | 175.2         | 184.9         | 175.2
  Function with Kernel | 155.1 (83.4%) | 149.6 (85.4%) | 154.7 (83.7%) | 146.2 (83.5%)
  Function with DP     | 168.2 (90.4%) | 160.6 (91.7%) | 167.1 (90.4%) | 159.4 (91.0%)
  Ignore State         | 70.3 (37.8%)  | 68.7 (39.2%)  | 69.6 (37.6%)  | 66.1 (37.7%)

Table 1: Mean values of decisions by method, year and data set. Percentages of the upper bound, Known Wind,
are given for the other methods.
Function-based with kernel. Function-based optimization where the weights are generated by a
Gaussian kernel. Bandwidth is selected according to the "rule of thumb" method of the np package
for R, $h_j = 1.06\,\hat{\sigma}_j\, n^{-1/(4+d)}$, where $\hat{\sigma}_j$ is defined as $\min(\text{sd}, \text{interquartile range}/1.349)$ [7].
Function-based with DP. Function-based optimization with Dirichlet process based weights. We
model the state distribution with the following hierarchical model,
P \sim DP(\alpha, G_0), \qquad \theta_i \mid P \sim P,
T_i^D \mid \theta_i \sim \text{von Mises}(\mu_{i,D}, \kappa_D), \qquad T_i^Y \mid \theta_i \sim \text{von Mises}(\mu_{i,Y}, \kappa_Y),
P_i^C \mid \theta_i \sim N(\mu_{i,C}, \sigma^2_{i,C}), \qquad P_i^S \mid \theta_i \sim N(\mu_{i,S}, \sigma^2_{i,S}),
W_i \mid \theta_i \sim N(\mu_{i,W1}, \sigma^2_{i,W1}), \qquad W_{i-1} \mid \theta_i \sim N(\mu_{i,W2}, \sigma^2_{i,W2}),
\theta_i = (\mu_{i,D}, \mu_{i,Y}, \mu_{i,C}, \sigma^2_{i,C}, \mu_{i,S}, \sigma^2_{i,S}, \mu_{i,W1}, \sigma^2_{i,W1}, \mu_{i,W2}, \sigma^2_{i,W2}).
We modeled the time of day, $T_i^D$, and year, $T_i^Y$, with a von Mises distribution, an exponential family
distribution over the unit sphere; the dispersion parameters, $\kappa_D$ and $\kappa_Y$, are hyperparameters. The
base measure was Normal-Inverse Gamma for $P_i^C$, $P_i^S$, $W_i$ and $W_{i-1}$ and uniform for the means of
$T_i^D$ and $T_i^Y$. 100 posterior samples were drawn using Gibbs sampling with a collapsed sampler for
all conjugate dimensions after a 1,000 iteration burn-in and 10 iteration pulse between samples.
Ignore state. Sample average approximation is used, $\bar{F}_n(x \mid s) = \frac{1}{n} \sum_{i=0}^{n-1} Y_{i+1}(x)$.
Results. Results are presented in Table 1. We display the value of each algorithm, along with
percentages of Known Wind for the other three methods. Both forms of function-based optimization
outperformed the algorithm in which the state variable was ignored by a large margin (~45% of the
best possible value). Dirichlet process weights outperformed kernel weights by a smaller but still
significant margin (5.6–8.2% of best possible value).
5 Discussion
We presented two new methods to solve stochastic optimization problems with an observable state
variable, including state variables that are too large for partitioning. Our methods make minimal assumptions. They are promising additions to areas that rely on observational data to make decisions
under changing conditions (energy, finance, dynamic pricing, inventory management), and some
communities that make sequential decisions under uncertainty (reinforcement learning, stochastic
programming, simulation optimization). Our methods can accommodate much larger state and decision spaces than MDPs and other table lookup methods, particularly when combined with Dirichlet
process mixture model weights. Unlike existing objective function approximation methods, such as
basis functions, our methods provide convex objective function approximations that can be used
with a variety of commercial solvers.
Acknowledgments
The research was funded in part by the Air Force Office of Scientific Research under AFOSR contract FA9550-08-1-0195, and the NSF under grant CMMI-0856153. David M. Blei is supported by
ONR 175-6343, NSF CAREER 0745520, AFOSR-09NL202 and the Alfred P. Sloan foundation.
References
[1] Antoniak, C. E. [1974], "Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems", The Annals of Statistics 2(6), 1152–1174.
[2] Bennett, K. P. and Parrado-Hernández, E. [2006], "The interplay of optimization and machine learning research", The Journal of Machine Learning Research 7, 1265–1281.
[3] Blackwell, D. and MacQueen, J. B. [1973], "Ferguson distributions via Polya urn schemes", The Annals of Statistics 1(2), 353–355.
[4] Cheung, R. K. and Powell, W. B. [2000], "SHAPE-A stochastic hybrid approximation procedure for two-stage stochastic programs", Operations Research 48(1), 73–79.
[5] Fan, J. and Gijbels, I. [1996], Local Polynomial Modelling and Its Applications, Chapman & Hall/CRC.
[6] Ferguson, T. S. [1973], "A Bayesian analysis of some nonparametric problems", The Annals of Statistics 1(2), 209–230.
[7] Hayfield, T. and Racine, J. S. [2008], "Nonparametric econometrics: The np package", Journal of Statistical Software 27(5), 1–32.
[8] Ishwaran, H. and James, L. F. [2003], "Generalized weighted Chinese restaurant processes for species sampling mixture models", Statistica Sinica 13(4), 1211–1236.
[9] Kiefer, J. and Wolfowitz, J. [1952], "Stochastic estimation of the maximum of a regression function", The Annals of Mathematical Statistics 23(3), 462–466.
[10] Nadaraya, E. A. [1964], "On estimating regression", Theory of Probability and its Applications 9(1), 141–142.
[11] Neal, R. M. [2000], "Markov chain sampling methods for Dirichlet process mixture models", Journal of Computational and Graphical Statistics 9(2), 249–265.
[12] Parr, R., Painter-Wakefield, C., Li, L. and Littman, M. [2007], Analyzing feature generation for value-function approximation, in "Proceedings of the 24th International Conference on Machine Learning", ACM, p. 744.
[13] Pitman, J. [1996], "Some developments of the Blackwell-MacQueen urn scheme", Lecture Notes-Monograph Series 30, 245–267.
[14] Powell, W. B. [2007], Approximate Dynamic Programming: Solving the Curses of Dimensionality, Wiley-Blackwell.
[15] Powell, W. B., Ruszczyński, A. and Topaloglu, H. [2004], "Learning algorithms for separable approximations of discrete stochastic optimization problems", Mathematics of Operations Research 29(4), 814–836.
[16] Puterman, M. L. [1994], Markov Decision Processes: Discrete Stochastic Dynamic Programming, John Wiley & Sons, Inc., New York, NY, USA.
[17] Robbins, H. and Monro, S. [1951], "A stochastic approximation method", The Annals of Mathematical Statistics 22(3), 400–407.
[18] Schwartz, E. S. [1997], "The stochastic behavior of commodity prices: Implications for valuation and hedging", The Journal of Finance 52(3), 923–973.
[19] Shapiro, A., Homem-de-Mello, T. and Kim, J. [2002], "Conditioning of convex piecewise linear stochastic programs", Mathematical Programming 94(1), 1–19.
[20] Spall, J. C. [2003], Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, John Wiley and Sons.
[21] Sutton, R. S. and Barto, A. G. [1998], Introduction to Reinforcement Learning, MIT Press, Cambridge, MA, USA.
[22] Tsitsiklis, J. N. and Van Roy, B. [2001], "Regression methods for pricing complex American-style options", IEEE Transactions on Neural Networks 12(4), 694–703.
[23] Watson, G. S. [1964], "Smooth regression analysis", Sankhyā: The Indian Journal of Statistics, Series A 26(4), 359–372.
3,421 | 4,099 | Sparse Inverse Covariance Selection via Alternating Linearization Methods
Katya Scheinberg
Department of ISE
Lehigh University
[email protected]
Shiqian Ma, Donald Goldfarb
Department of IEOR
Columbia University
{sm2756,goldfarb}@columbia.edu
Abstract
Gaussian graphical models are of great interest in statistical learning. Because the
conditional independencies between different nodes correspond to zero entries in
the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample
data, by solving a convex maximum likelihood problem with an $\ell_1$-regularization
term. In this paper, we propose a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the
subproblems solved in each iteration have closed-form solutions. Moreover, our
algorithm obtains an $\epsilon$-optimal solution in $O(1/\epsilon)$ iterations. Numerical experiments on both synthetic and real data from gene association networks show that a
practical version of this algorithm outperforms other competitive algorithms.
1 Introduction
In multivariate data analysis, graphical models such as Gaussian Markov Random Fields provide a way to discover meaningful interactions among variables. Let $Y = \{y^{(1)}, \ldots, y^{(n)}\}$ be
an n-dimensional random vector following an n-variate Gaussian distribution $N(\mu, \Sigma)$, and let
$G = (V, E)$ be a Markov network representing the conditional independence structure of $N(\mu, \Sigma)$.
Specifically, the set of vertices $V = \{1, \ldots, n\}$ corresponds to the set of variables in $Y$, and the
edge set $E$ contains an edge $(i, j)$ if and only if $y^{(i)}$ is conditionally dependent on $y^{(j)}$ given all
remaining variables; i.e., the lack of an edge between $i$ and $j$ denotes the conditional independence of $y^{(i)}$ and $y^{(j)}$, which corresponds to a zero entry in the inverse covariance matrix $\Sigma^{-1}$
([1]). Thus learning the structure of this graphical model is equivalent to the problem of learning the
zero-pattern of $\Sigma^{-1}$. To estimate this sparse inverse covariance matrix, one can solve the following
sparse inverse covariance selection (SICS) problem:
\max_{X \in S^n_{++}} \; \log\det(X) - \langle \hat{\Sigma}, X \rangle - \rho \|X\|_0,
where $S^n_{++}$ denotes the set of $n \times n$ positive definite matrices, $\|X\|_0$ is the number of nonzeros in
$X$, $\hat{\Sigma} = \frac{1}{p} \sum_{i=1}^{p} (Y_i - \hat{\mu})(Y_i - \hat{\mu})^\top$ is the sample covariance matrix, $\hat{\mu} = \frac{1}{p} \sum_{i=1}^{p} Y_i$ is the sample
mean and $Y_i$ is the $i$-th random sample of $Y$. This problem is NP-hard in general due to the combinatorial nature of the cardinality term $\rho\|X\|_0$ ([2]). To get a numerically tractable problem, one
can replace the cardinality term $\|X\|_0$ by $\|X\|_1 := \sum_{i,j} |X_{ij}|$, the envelope of $\|X\|_0$ over the set
$\{X \in \mathbb{R}^{n \times n} : \|X\|_\infty \le 1\}$ (see [3]). This results in the convex optimization problem (see e.g.,
[4, 5, 6, 7]):
\min_{X \in S^n_{++}} \; -\log\det(X) + \langle \hat{\Sigma}, X \rangle + \rho \|X\|_1. \qquad (1)
Note that (1) can be rewritten as $\min_{X \in S^n_{++}} \max_{\|U\|_\infty \le \rho} -\log\det X + \langle \hat{\Sigma} + U, X \rangle$, where $\|U\|_\infty$
is the largest absolute value of the entries of $U$. By exchanging the order of max and min, we obtain
the dual problem $\max_{\|U\|_\infty \le \rho} \min_{X \in S^n_{++}} -\log\det X + \langle \hat{\Sigma} + U, X \rangle$, which is equivalent to
\max_{W \in S^n_{++}} \{\log\det W + n : \|W - \hat{\Sigma}\|_\infty \le \rho\}. \qquad (2)
Both the primal and dual problems have strictly convex objectives; hence, their optimal solutions
are unique. Given a dual solution $W$, $X = W^{-1}$ is primal feasible, resulting in the duality gap
\text{gap} := \langle \hat{\Sigma}, W^{-1} \rangle + \rho \|W^{-1}\|_1 - n. \qquad (3)
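A minimal sketch of the duality-gap computation in Eq. (3), which can serve as a stopping criterion; it assumes W is dual feasible, i.e., entrywise within rho of the sample covariance.

```python
import numpy as np

def sics_duality_gap(W, Sigma_hat, rho):
    # Eq. (3): gap = <Sigma_hat, W^{-1}> + rho * ||W^{-1}||_1 - n.
    X = np.linalg.inv(W)          # primal feasible point recovered from a dual W
    return np.sum(Sigma_hat * X) + rho * np.abs(X).sum() - W.shape[0]
```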
The primal and the dual SICS problems (1) and (2) are semidefinite programming problems and can
be solved via interior point methods (IPMs) in polynomial time. However, the per-iteration computational cost and memory requirements of an IPM are prohibitively high for the SICS problem.
Although an approximate IPM has recently been proposed for the SICS problem [8], most of the
methods developed for it are first-order methods. Banerjee et al. [7] proposed a block coordinate
descent (BCD) method to solve the dual problem (2). Their method updates one row and one column
of W in each iteration by solving a convex quadratic programming problem by an IPM. The glasso
method of Friedman et al. [5] is based on the same BCD approach as in [7], but it solves each subproblem as a LASSO problem by yet another coordinate descent (CD) method [9]. Sun et al. [10]
proposed solving the primal problem (1) by using a BCD method. They formulate the subproblem
as a min-max problem and solve it using a prox method proposed by Nemirovski [11]. The SINCO
method proposed by Scheinberg and Rish [12] is a greedy CD method applied to the primal problem.
All of these BCD and CD approaches lack iteration complexity bounds. They also have been shown
to be inferior in practice to gradient based approaches. A projected gradient method for solving
the dual problem (2) that is considered to be state-of-the-art for SICS was proposed by Duchi et al.
[13]. However, there are no iteration complexity results for it either. Variants of Nesterov's method
[14, 15] have been applied to solve the SICS problem. d'Aspremont et al. [16] applied Nesterov's
optimal first-order method to solve the primal problem (1) after smoothing the nonsmooth $\ell_1$ term,
obtaining an iteration complexity bound of $O(1/\epsilon)$ for an $\epsilon$-optimal solution, but the implementation
in [16] was very slow and did not produce good results. Lu [17] solved the dual problem (2), which
is a smooth problem, by Nesterov's algorithm, and improved the iteration complexity to $O(1/\sqrt{\epsilon})$.
However, since the practical performance of this algorithm was not attractive, Lu gave a variant
(VSM) of it that exhibited better performance. The iteration complexity of VSM is unknown. Yuan
[18] proposed an alternating direction method based on an augmented Lagrangian framework (see
the ADAL method (8) below). This method also lacks complexity results. The proximal point algorithm proposed by Wang et al. in [19] requires a reformulation of the problem that increases the size
of the problem making it impractical for solving large-scale problems. Also, there is no iteration
complexity bound for this algorithm. The IPM in [8] also requires such a reformulation.
Our contribution. In this paper, we propose an alternating linearization method (ALM) for solving
the primal SICS problem. An advantage of solving the primal problem is that the $\ell_1$ penalty term in
the objective function directly promotes sparsity in the optimal inverse covariance matrix.
Although developed independently, our method is closely related to Yuan's method [18]. Both
methods exploit the special form of the primal problem (1) by alternatingly minimizing one of
the terms of the objective function plus an approximation to the other term. The main difference
between the two methods is in the construction of these approximations. As we will show, our
method has a theoretically justified interpretation and is based on an algorithmic framework with
complexity bounds, while no complexity bound is available for Yuan's method. Also our method
has an intuitive interpretation from a learning perspective. Extensive numerical test results on both
synthetic data and real problems have shown that our ALM algorithm significantly outperforms
other existing algorithms, such as the PSM algorithm proposed by Duchi et al. [13] and the VSM
algorithm proposed by Lu [17]. Note that it is shown in [13] and [17] that PSM and VSM outperform
the BCD method in [7] and glasso in [5].
Organization of the paper. In Section 2 we briefly review alternating linearization methods for
minimizing the sum of two convex functions and establish convergence and iteration complexity
results. We show how to use ALM to solve SICS problems and give intuition from a learning
perspective in Section 3. Finally, we present some numerical results on both synthetic and real data
in Section 4 and compare ALM with the PSM algorithm [13] and the VSM algorithm [17].
2 Alternating Linearization Methods
We consider here the alternating linearization method (ALM) for solving the following problem:
\min_x \; F(x) \triangleq f(x) + g(x), \qquad (4)
where $f$ and $g$ are both convex functions. An effective way to solve (4) is to "split" $f$ and $g$ by
introducing a new variable, i.e., to rewrite (4) as
\min_{x,y} \{f(x) + g(y) : x - y = 0\}, \qquad (5)
and apply an alternating direction augmented Lagrangian method to it. Given a penalty parameter
$1/\mu$, at the $k$-th iteration, the augmented Lagrangian method minimizes the augmented Lagrangian
function
\mathcal{L}(x, y; \lambda) := f(x) + g(y) - \langle \lambda, x - y \rangle + \frac{1}{2\mu} \|x - y\|_2^2,
with respect to $x$ and $y$, i.e., it solves the subproblem
(x^k, y^k) := \arg\min_{x,y} \mathcal{L}(x, y; \lambda^k), \qquad (6)
and updates the Lagrange multiplier $\lambda$ via:
\lambda^{k+1} := \lambda^k - (x^k - y^k)/\mu. \qquad (7)
Since minimizing $\mathcal{L}(x, y; \lambda)$ with respect to $x$ and $y$ jointly is usually difficult, while doing so with
respect to $x$ and $y$ alternatingly can often be done efficiently, the following alternating direction
version of the augmented Lagrangian method (ADAL) is often advocated (see, e.g., [20, 21]):
x^{k+1} := \arg\min_x \mathcal{L}(x, y^k; \lambda^k)
y^{k+1} := \arg\min_y \mathcal{L}(x^{k+1}, y; \lambda^k) \qquad (8)
\lambda^{k+1} := \lambda^k - (x^{k+1} - y^{k+1})/\mu.
If we also update $\lambda$ after we solve the subproblem with respect to $x$, we get the following symmetric
version of the ADAL method:
x^{k+1} := \arg\min_x \mathcal{L}(x, y^k; \lambda_y^k)
\lambda_x^{k+1} := \lambda_y^k - (x^{k+1} - y^k)/\mu \qquad (9)
y^{k+1} := \arg\min_y \mathcal{L}(x^{k+1}, y; \lambda_x^{k+1})
\lambda_y^{k+1} := \lambda_x^{k+1} - (x^{k+1} - y^{k+1})/\mu.
Algorithm (9) has certain theoretical advantages when f and g are smooth. In this case, from the
first-order optimality conditions for the two subproblems in (9), we have that:
\lambda_x^{k+1} = \nabla f(x^{k+1}) \quad \text{and} \quad \lambda_y^{k+1} = -\nabla g(y^{k+1}). \qquad (10)
Substituting these relations into (9), we obtain the following equivalent algorithm for solving (4),
which we refer to as the alternating linearization minimization (ALM) algorithm.
Algorithm 1 Alternating linearization method (ALM) for smooth problem
Input: $x^0 = y^0$
for k = 0, 1, ... do
  1. Solve $x^{k+1} := \arg\min_x Q_g(x, y^k) \triangleq f(x) + g(y^k) + \langle \nabla g(y^k), x - y^k \rangle + \frac{1}{2\mu}\|x - y^k\|_2^2$;
  2. Solve $y^{k+1} := \arg\min_y Q_f(x^{k+1}, y) \triangleq f(x^{k+1}) + \langle \nabla f(x^{k+1}), y - x^{k+1} \rangle + \frac{1}{2\mu}\|y - x^{k+1}\|_2^2 + g(y)$;
end for
Algorithm 1 can be viewed in the following way: at each iteration we construct a quadratic approximation of the function $g(x)$ at the current iterate $y^k$ and minimize the sum of this approximation and
$f(x)$. The approximation is based on linearizing $g(x)$ (hence the name ALM) and adding a "prox"
term $\frac{1}{2\mu}\|x - y^k\|_2^2$. When $\mu$ is small enough ($\mu \le 1/L(g)$, where $L(g)$ is the Lipschitz constant for
$\nabla g$), this quadratic function, $g(y^k) + \langle \nabla g(y^k), x - y^k \rangle + \frac{1}{2\mu}\|x - y^k\|_2^2$, is an upper approximation to
$g(x)$, which means that the reduction in the value of $F(x)$ achieved by minimizing $Q_g(x, y^k)$ in Step
1 is not smaller than the reduction achieved in the value of $Q_g(x, y^k)$ itself. Similarly, in Step 2 we
build an upper approximation to $f(x)$ at $x^{k+1}$, $f(x^{k+1}) + \langle \nabla f(x^{k+1}), y - x^{k+1} \rangle + \frac{1}{2\mu}\|y - x^{k+1}\|_2^2$,
and minimize the sum $Q_f(x^{k+1}, y)$ of it and $g(y)$.
Let us now assume that $f(x)$ is in the class $C^{1,1}$ with Lipschitz constant $L(f)$, while $g(x)$ is simply
convex. Then from the first-order optimality conditions for the second minimization in (9), we have
$-\lambda_y^{k+1} \in \partial g(y^{k+1})$, the subdifferential of $g(y)$ at $y = y^{k+1}$. Hence, replacing $\nabla g(y^k)$ in the
definition of $Q_g(x, y^k)$ by $-\lambda_y^{k+1}$ in (9), we obtain the following modified version of (9).
Algorithm 2 Alternating linearization method with skipping step
Input: $x^0 = y^0$
for k = 0, 1, ... do
  1. Solve $x^{k+1} := \arg\min_x Q(x, y^k) \triangleq f(x) + g(y^k) - \langle \lambda^k, x - y^k \rangle + \frac{1}{2\mu}\|x - y^k\|_2^2$;
  2. If $F(x^{k+1}) > Q(x^{k+1}, y^k)$ then $x^{k+1} := y^k$.
  3. Solve $y^{k+1} := \arg\min_y Q_f(x^{k+1}, y)$;
  4. $\lambda^{k+1} = \nabla f(x^{k+1}) - (x^{k+1} - y^{k+1})/\mu$.
end for
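To make the framework concrete, here is a minimal sketch of Algorithm 2 on a toy problem with smooth f(x) = (1/2)||Ax - b||^2 and nonsmooth g(x) = rho*||x||_1, for which both subproblems have closed forms (a linear solve and soft-thresholding). This illustrates the generic scheme, not the SICS solver itself; take mu <= 1/lambda_max(A^T A) for the theory to apply.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alm_lasso(A, b, rho, mu, n_iters=200):
    n = A.shape[1]
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad_f = lambda x: A.T @ (A @ x - b)
    g = lambda x: rho * np.abs(x).sum()
    # Step 1 reduces to the linear system (A^T A + I/mu) x = A^T b + lam + y/mu.
    M = A.T @ A + np.eye(n) / mu
    y = np.zeros(n)
    lam = np.zeros(n)
    for _ in range(n_iters):
        x = np.linalg.solve(M, A.T @ b + lam + y / mu)          # step 1
        Q = f(x) + g(y) - lam @ (x - y) + np.sum((x - y) ** 2) / (2 * mu)
        if f(x) + g(x) > Q:                                     # step 2: skip
            x = y
        y = soft_threshold(x - mu * grad_f(x), mu * rho)        # step 3: prox
        lam = grad_f(x) - (x - y) / mu                          # step 4
    return y
```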
Algorithm 2 is identical to the symmetric ADAL algorithm (9) as long as $F(x^{k+1}) \le Q(x^{k+1}, y^k)$
at each iteration (and to Algorithm 1 if $g(x)$ is in $C^{1,1}$ and $\mu \le 1/\max\{L(f), L(g)\}$). If this condition fails, then the algorithm simply sets $x^{k+1} \leftarrow y^k$. Algorithm 2 has the following convergence
property and iteration complexity bound. For a proof see the Appendix.
Theorem 2.1. Assume $\nabla f$ is Lipschitz continuous with constant $L(f)$. For $\beta/L(f) \le \mu \le 1/L(f)$
where $0 < \beta \le 1$, Algorithm 2 satisfies
F(y^k) - F(x^*) \le \frac{\|x^0 - x^*\|^2}{2\mu(k + k_n)}, \quad \forall k, \qquad (11)
where $x^*$ is an optimal solution of (4) and $k_n$ is the number of iterations until the $k$-th for which
$F(x^{k+1}) \le Q(x^{k+1}, y^k)$. Thus Algorithm 2 produces a sequence which converges to the optimal
solution in function value, and the number of iterations needed is $O(1/\epsilon)$ for an $\epsilon$-optimal solution.
If g(x) is also a smooth function in the class C 1,1 with Lipschitz constant L(g) ? 1/?, then Theorem
2.1 also applies to Algorithm 1 since in this case kn = k (i.e., no ?skipping? occurs). Note that the
iteration complexity bound in Theorem 2.1 can be improved.
Nesterov [15, 22] proved that one can
obtain an optimal iteration complexity bound of $O(1/\sqrt{\epsilon})$, using only first-order information. His
acceleration technique is based on using a linear combination of previous iterates to obtain a point
where the approximation is built. This technique has been exploited and extended by Tseng [23],
Beck and Teboulle [24], Goldfarb et al. [25] and many others. A similar technique can be adopted
to derive a fast version of Algorithm 2 that has an improved complexity bound of $O(1/\sqrt{\epsilon})$, while
keeping the computational effort in each iteration almost unchanged. However, we do not present
this method here, since when applied to the SICS problem, it did not work as well as Algorithm 2.
3 ALM for SICS

The SICS problem
$$
\min_{X \in S_{++}^n} F(X) \equiv f(X) + g(X), \qquad (12)
$$
where $f(X) = -\log\det(X) + \langle \hat\Sigma, X \rangle$ and $g(X) = \rho\|X\|_1$, is of the same form as (4). However,
in this case neither f(X) nor g(X) has a Lipschitz continuous gradient. Moreover, f(X) is only
defined for positive definite matrices while g(X) is defined everywhere. These properties of the
objective function make the SICS problem especially challenging for optimization methods. Nevertheless, we can still apply (9) to solve the problem directly. Moreover, we can apply Algorithm 2
and obtain the complexity bound in Theorem 2.1 as follows.

The log det(X) term in f(X) implicitly requires that $X \in S_{++}^n$, and the gradient of f(X), which
is given by $-X^{-1} + \hat\Sigma$, is not Lipschitz continuous on $S_{++}^n$. Fortunately, as proved in Proposition
3.1 in [17], the optimal solution of (12) satisfies $X^* \succeq \alpha I$, where $\alpha = 1/(\|\hat\Sigma\| + n\rho)$. Therefore, if we define
$C := \{X \in S^n : X \succeq \frac{\alpha}{2} I\}$, the SICS problem (12) can be formulated as:
$$
\min_{X,Y} \{ f(X) + g(Y) : X - Y = 0,\ X \in C,\ Y \in C \}. \qquad (13)
$$
We can include the constraints X ∈ C in Step 1 and Y ∈ C in Step 3 of Algorithm 2. Theorem 2.1
can then be applied as discussed in [25]. However, a difficulty now arises when performing the
minimization in Y. Without the constraint Y ∈ C, only a matrix shrinkage operation is needed,
but with this additional constraint the problem becomes harder to solve. Minimization in X, with or
without the constraint X ∈ C, is accomplished by performing an SVD, so for the X-subproblem the
constraint can be easily imposed.

Instead of imposing the constraint Y ∈ C we can obtain feasible solutions by a line search on μ. We
know that the constraint $X \succeq \frac{\alpha}{2} I$ is not tight at the solution. Hence, if we start the algorithm with
$X \succeq \alpha I$ and restrict the step size μ to be sufficiently small, then the iterates of the method will
remain in C.

Note, however, that the bound on the Lipschitz constant of the gradient of f(X) over C is of order
1/α² and hence can be very large. It is not practical to restrict μ in the algorithm to be of order α², since μ
determines the step size at each iteration. Hence, for a practical approach we can only claim that the
theoretical convergence rate bound holds in a small neighborhood of the optimal solution. We
now present a practical version of our algorithm applied to the SICS problem.
Algorithm 3 Alternating linearization method (ALM) for SICS
Input: X^0 = Y^0, μ_0.
for k = 0, 1, ... do
  0. Pick μ_{k+1} ≤ μ_k.
  1. Solve $X^{k+1} := \arg\min_{X \in C} f(X) + g(Y^k) - \langle \Lambda^k, X - Y^k \rangle + \frac{1}{2\mu_{k+1}}\|X - Y^k\|_F^2$;
  2. If $g(X^{k+1}) > g(Y^k) - \langle \Lambda^k, X^{k+1} - Y^k \rangle + \frac{1}{2\mu_{k+1}}\|X^{k+1} - Y^k\|_F^2$, then $X^{k+1} := Y^k$.
  3. Solve $Y^{k+1} := \arg\min_Y f(X^{k+1}) + \langle \nabla f(X^{k+1}), Y - X^{k+1} \rangle + \frac{1}{2\mu_{k+1}}\|Y - X^{k+1}\|_F^2 + g(Y)$;
  4. $\Lambda^{k+1} = \nabla f(X^{k+1}) - (X^{k+1} - Y^{k+1})/\mu_{k+1}$.
end for
We now show how to solve the two optimization problems in Algorithm 3. The first-order optimality
conditions for Step 1 in Algorithm 3, ignoring the constraint X ∈ C, are:
$$
\nabla f(X) - \Lambda^k + (X - Y^k)/\mu_{k+1} = 0. \qquad (14)
$$
Consider $V \mathrm{Diag}(d) V^\top$, the spectral decomposition of $Y^k + \mu_{k+1}(\Lambda^k - \hat\Sigma)$, and let
$$
\gamma_i = \left(d_i + \sqrt{d_i^2 + 4\mu_{k+1}}\right)/2, \quad i = 1, \ldots, n. \qquad (15)
$$
Since $\nabla f(X) = -X^{-1} + \hat\Sigma$, it is easy to verify that $X^{k+1} := V \mathrm{Diag}(\gamma) V^\top$ satisfies (14). When
the constraint $X^{k+1} \in C$ is imposed, the optimal solution changes to $X^{k+1} := V \mathrm{Diag}(\gamma) V^\top$ with
$\gamma_i = \max\{\alpha/2, (d_i + \sqrt{d_i^2 + 4\mu_{k+1}})/2\}$, $i = 1, \ldots, n$. We observe that solving (14) requires
approximately the same effort (O(n^3)) as is required to compute $\nabla f(X^{k+1})$. Moreover, from the
solution to (14), $\nabla f(X^{k+1})$ is obtained with only a negligible amount of additional effort, since
$(X^{k+1})^{-1} := V \mathrm{Diag}(\gamma)^{-1} V^\top$.
The first-order optimality conditions for Step 3 in Algorithm 3 are:
$$
0 \in \nabla f(X^{k+1}) + (Y - X^{k+1})/\mu_{k+1} + \partial g(Y). \qquad (16)
$$
Since $g(Y) = \rho\|Y\|_1$, it is well known that the solution to (16) is given by
$$
Y^{k+1} = \mathrm{shrink}\left(X^{k+1} - \mu_{k+1}(\hat\Sigma - (X^{k+1})^{-1}),\ \mu_{k+1}\rho\right),
$$
where the "shrinkage operator" shrink(Z, τ) updates each element Z_{ij} of the matrix Z by the formula $\mathrm{shrink}(Z, \tau)_{ij} = \mathrm{sgn}(Z_{ij}) \cdot \max\{|Z_{ij}| - \tau, 0\}$.
The O(n^3) complexity of Step 1, which requires a spectral decomposition, dominates the O(n^2)
complexity of Step 3, which requires a simple shrinkage. There is no closed-form solution for the
subproblem corresponding to Y when the constraint Y ∈ C is imposed. Hence, we neither impose
this constraint explicitly nor do so by a line search on μ_k, since in practice this degrades the performance of the algorithm substantially. Thus, the resulting iterates Y^k may not be positive definite,
while the iterates X^k remain so. Eventually, due to the convergence of Y^k and X^k, the Y^k iterates
become positive definite and the constraint Y ∈ C is satisfied.
Let us now remark on the learning-based intuition behind Algorithm 3. We recall that $-\Lambda^k \in \partial g(Y^k)$. The two steps of the algorithm can be written as
$$
X^{k+1} := \arg\min_{X \in C} \left\{ f(X) + \frac{1}{2\mu_{k+1}} \|X - (Y^k + \mu_{k+1}\Lambda^k)\|_F^2 \right\} \qquad (17)
$$
and
$$
Y^{k+1} := \arg\min_Y \left\{ g(Y) + \frac{1}{2\mu_{k+1}} \|Y - (X^{k+1} - \mu_{k+1}(\hat\Sigma - (X^{k+1})^{-1}))\|_F^2 \right\}. \qquad (18)
$$
The SICS problem is trying to optimize two conflicting objectives: on the one hand it tries to find a
covariance matrix $X^{-1}$ that best fits the observed data, i.e., is such that X is as close to $\hat\Sigma^{-1}$ as possible, and on the
other hand it tries to obtain a sparse matrix X. The proposed algorithm addresses these two objectives
in an alternating manner. Given an initial "guess" of the sparse matrix, Y^k, we update this guess
by a subgradient descent step of length μ_{k+1}: $Y^k + \mu_{k+1}\Lambda^k$ (recall that $-\Lambda^k \in \partial g(Y^k)$). Then
problem (17) seeks a solution X that optimizes the first objective (best fit of the data) while adding
a regularization term which imposes a Gaussian prior on X whose mean is the current guess for the
sparse matrix, $Y^k + \mu_{k+1}\Lambda^k$. The solution to (17) gives us a guess for the inverse covariance, X^{k+1}.
We again update it by taking a gradient descent step: $X^{k+1} - \mu_{k+1}(\hat\Sigma - (X^{k+1})^{-1})$. Then problem
(18) seeks a sparse solution Y while also imposing a Gaussian prior on Y whose mean is the guess
for the inverse covariance matrix, $X^{k+1} - \mu_{k+1}(\hat\Sigma - (X^{k+1})^{-1})$. Hence the sequence of X^k's is
a sequence of positive definite inverse covariance matrices that converge to a sparse matrix, while
the sequence of Y^k's is a sequence of sparse matrices that converges to a positive definite inverse
covariance matrix.
An important question is how to pick μ_{k+1}. Theory tells us that if we pick a small enough value,
then we obtain the complexity bounds; in practice, however, such a value is too small to make good progress. We discuss
the simple strategy that we use in the next section.
4 Numerical Experiments

In this section, we present numerical results on both synthetic and real data to demonstrate the
efficiency of our ALM algorithm for SICS. Our codes for ALM were written in MATLAB. All numerical experiments were run in MATLAB 7.3.0 on a Dell Precision 670 workstation with an Intel
Xeon 3.4GHz CPU and 6GB of RAM.

Since $-\Lambda^k \in \partial g(Y^k)$, we have $\|\Lambda^k\|_\infty \le \rho$; hence $\hat\Sigma - \Lambda^k$ is a feasible solution to the dual problem (2) as
long as it is positive definite. Thus the duality gap at the k-th iteration is given by:
$$
\mathrm{Dgap} := -\log\det(X^k) + \langle \hat\Sigma, X^k \rangle + \rho\|X^k\|_1 - \log\det(\hat\Sigma - \Lambda^k) - n. \qquad (19)
$$
We define the relative duality gap as Rel.gap := Dgap/(1 + |pobj| + |dobj|), where pobj and dobj
are respectively the objective function values of the primal problem (12) at the point X^k and the dual
problem (2) at $\hat\Sigma - \Lambda^k$. Defining $d_k(\pi(x)) \equiv \max\{1, \pi(x^k), \pi(x^{k-1})\}$, we measure the relative
changes of the objective function value F(X) and the iterates X and Y as follows:
$$
F\mathrm{rel} := \frac{|F(X^k) - F(X^{k-1})|}{d_k(|F(X)|)}, \quad
X\mathrm{rel} := \frac{\|X^k - X^{k-1}\|_F}{d_k(\|X\|_F)}, \quad
Y\mathrm{rel} := \frac{\|Y^k - Y^{k-1}\|_F}{d_k(\|Y\|_F)}.
$$
We terminate ALM when either
$$
\text{(i) } \mathrm{Dgap} \le \epsilon_{\mathrm{gap}} \quad\text{or}\quad \text{(ii) } \max\{F\mathrm{rel},\, X\mathrm{rel},\, Y\mathrm{rel}\} \le \epsilon_{\mathrm{rel}}. \qquad (20)
$$
Note that in (19), computing log det(X^k) is easy since the spectral decomposition of X^k is already
available (see (14) and (15)), but computing $\log\det(\hat\Sigma - \Lambda^k)$ requires another expensive spectral
decomposition. Thus, in practice, we only check (20)(i) every N_gap iterations. We check (20)(ii) at
every iteration since this is inexpensive.
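In code, the two checks amount to the following sketch (slogdet is used here for numerical safety; the helper names are ours):

```python
import numpy as np

def duality_gap(Sigma, X, Lam, rho):
    # Dgap of (19); only valid when Sigma - Lam is positive definite.
    sx, logdet_X = np.linalg.slogdet(X)
    sd, logdet_D = np.linalg.slogdet(Sigma - Lam)
    assert sx > 0 and sd > 0, "dual point not positive definite"
    return (-logdet_X + np.sum(Sigma * X) + rho * np.abs(X).sum()
            - logdet_D - X.shape[0])

def rel_criterion(F_cur, F_prev, X_cur, X_prev, Y_cur, Y_prev):
    # max{Frel, Xrel, Yrel} of (20)(ii); d_k(.) = max{1, ., .} as in the text.
    dk = lambda a, b: max(1.0, a, b)
    Frel = abs(F_cur - F_prev) / dk(abs(F_cur), abs(F_prev))
    Xrel = np.linalg.norm(X_cur - X_prev) / dk(np.linalg.norm(X_cur),
                                               np.linalg.norm(X_prev))
    Yrel = np.linalg.norm(Y_cur - Y_prev) / dk(np.linalg.norm(Y_cur),
                                               np.linalg.norm(Y_prev))
    return max(Frel, Xrel, Yrel)
```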
A continuation strategy for updating μ is also crucial to ALM. In our experiments, we adopted the
following update rule: after every N_μ iterations, we set μ := max{μ · η_μ, μ̄}; i.e., we simply reduce
μ by a constant factor η_μ every N_μ iterations until a desired lower bound μ̄ on μ is reached.

We compare ALM (i.e., Algorithm 3 with the above stopping criteria and μ updates) with the
projected subgradient method (PSM) proposed by Duchi et al. in [13] and implemented by Mark
Schmidt¹, and the smoothing method (VSM)² proposed by Lu in [17], which are considered to be
the state-of-the-art algorithms for solving SICS problems. The per-iteration complexity of all three
algorithms is roughly the same; hence a comparison of the number of iterations is meaningful. The
parameters used in PSM and VSM are set at their default values. We used the following parameter
values in ALM: ε_gap = 10⁻³, ε_rel = 10⁻⁸, N_gap = 20, N_μ = 20, μ̄ = max{μ₀ · η_μ⁸, 10⁻⁶}, η_μ =
1/3, where μ₀ is the initial μ, set according to ρ; specifically, in our experiments, μ₀ =
100/ρ if ρ < 0.5, μ₀ = ρ if 0.5 ≤ ρ ≤ 10, and μ₀ = ρ/100 if ρ > 10.
4.1 Experiments on synthetic data

We randomly created test problems using a procedure proposed by Scheinberg and Rish in [12].
Similar procedures were used by Wang et al. in [19] and Li and Toh in [8]. For a given dimension n,
we first created a sparse matrix U ∈ R^{n×n} with nonzero entries equal to -1 or 1 with equal probability. Then we computed $S := (U U^\top)^{-1}$ as the true covariance matrix, so that $S^{-1}$ is sparse.
We then drew p = 5n i.i.d. vectors, Y₁, ..., Y_p, from the Gaussian distribution N(0, S) (using the
mvnrnd function in MATLAB) and computed the sample covariance matrix $\hat\Sigma := \frac{1}{p}\sum_{i=1}^{p} Y_i Y_i^\top$.
We compared ALM with PSM [13] and VSM [17] on these randomly created data with different
values of ρ. The PSM code was terminated using its default stopping criteria, which included (20)(i) with
ε_gap = 10⁻³. VSM was also terminated when Dgap ≤ 10⁻³. Since PSM and VSM solve the
dual problem (2), the duality gap, which is given by (3), is available without any additional spectral
decompositions. The results are shown in Table 1. All CPU times reported are in seconds.
Table 1: Comparison of ALM, PSM and VSM on synthetic data

                      ALM                                PSM                                VSM
   n     iter   Dgap      Rel.gap    CPU    iter   Dgap      Rel.gap    CPU    iter   Dgap      Rel.gap    CPU
ρ = 0.1
  200    300    8.70e-4   1.51e-6      13   1682   9.99e-4   1.74e-6      38    857   9.97e-4   1.73e-6      37
  500    220    5.55e-4   4.10e-7      84    861   9.98e-4   7.38e-7     205    946   9.98e-4   7.38e-7     377
 1000    180    9.92e-4   3.91e-7     433    292   9.91e-4   3.91e-7     446    741   9.97e-4   3.94e-7    1928
 1500    199    1.73e-3   4.86e-7    1405    419   9.76e-4   2.74e-7    1975    802   9.98e-4   2.80e-7    6340
 2000    200    6.13e-5   1.35e-8    3110    349   1.12e-3   2.46e-7    3759    915   1.00e-3   2.20e-7   16085
ρ = 0.5
  200    140    9.80e-4   1.15e-6       6   6106   1.00e-3   1.18e-6     137   1000   9.99e-4   1.18e-6      43
  500    100    1.69e-4   7.59e-8      39    903   9.90e-4   4.46e-7     212   1067   9.99e-4   4.50e-7     425
 1000    100    9.28e-4   2.12e-7     247    489   9.80e-4   2.24e-7     749   1039   9.95e-4   2.27e-7    2709
 1500    140    2.17e-4   3.39e-8    1014    746   9.96e-4   1.55e-7    3514   1191   9.96e-4   1.55e-7    9405
 2000    160    4.70e-4   5.60e-8    2529    613   9.96e-4   1.18e-7    6519   1640   9.99e-4   1.19e-7   28779
ρ = 1.0
  200    180    4.63e-4   4.63e-7       8   7536   1.00e-3   1.00e-6     171   1296   9.96e-4   9.96e-7      57
  500    140    4.14e-4   1.56e-7      55   2099   9.96e-4   3.76e-7     495   1015   9.97e-4   3.76e-7     406
 1000    160    3.19e-4   6.07e-8     394    774   9.83e-4   1.87e-7    1172   1310   9.97e-4   1.90e-7    3426
 1500    180    8.28e-4   1.07e-7    1304   1088   9.88e-4   1.27e-7    5100   1484   9.96e-4   1.28e-7   11749
 2000    240    9.58e-4   9.37e-8    3794   1158   9.35e-4   9.15e-8   12310   2132   9.99e-4   9.77e-8   37406
From Table 1 we see that on these randomly created SICS problems, ALM outperforms PSM and
VSM in both accuracy and CPU time, with the performance gap increasing as ρ increases. For
example, for ρ = 1.0 and n = 2000, ALM achieves Dgap = 9.58e-4 in about 1 hour and 15
minutes, while PSM and VSM need about 3 hours and 25 minutes and 10 hours and 23 minutes,
respectively, to achieve similar accuracy.
¹ The MATLAB code can be downloaded from http://www.cs.ubc.ca/~schmidtm/Software/PQN.html
² The MATLAB code can be downloaded from http://www.math.sfu.ca/~zhaosong
4.2 Experiments on real data

We tested ALM on real data from gene expression networks, using the five data sets from [8] provided
to us by Kim-Chuan Toh: (1) Lymph node status; (2) Estrogen receptor; (3) Arabidopsis thaliana;
(4) Leukemia; (5) Hereditary breast cancer. See [8] and references therein for descriptions of
these data sets. Table 2 presents our test results. As suggested in [8], we set ρ = 0.5. From Table 2
we see that ALM is much faster and provided more accurate solutions than PSM and VSM.
Table 2: Comparison of ALM, PSM and VSM on real data

                       ALM                              PSM                               VSM
prob.    n    iter   Dgap      Rel.gap   CPU    iter   Dgap      Rel.gap    CPU    iter   Dgap      Rel.gap    CPU
 (1)    587    60    9.41e-6   5.78e-9     35    178   9.22e-4   5.67e-7      64    467   9.78e-4   6.01e-7     273
 (2)    692    80    6.13e-5   3.32e-8     73    969   9.94e-4   5.38e-7     531    953   9.52e-4   5.16e-7     884
 (3)    834   100    7.26e-5   3.27e-8    150    723   1.00e-3   4.50e-7     662   1097   7.31e-4   3.30e-7    1668
 (4)   1255   120    6.69e-4   1.97e-7    549   1405   9.89e-4   2.91e-7    4041   1740   9.36e-4   2.76e-7    8568
 (5)   1869   160    5.59e-4   1.18e-7   2158   1639   9.96e-4   2.10e-7   14505   3587   9.93e-4   2.09e-7   52978

4.3 Solution Sparsity
In this section, we compare the sparsity patterns of the solutions produced by ALM, PSM and VSM.
For ALM, the sparsity of the solution is given by the sparsity of Y. Since PSM and VSM solve
the dual problem, the primal solution X, obtained by inverting the dual solution W, is never exactly sparse
due to floating point errors. Thus it is not fair to measure the sparsity of X or of a truncated version
of X. Instead, we measure the sparsity of the solutions produced by PSM and VSM by appealing
to complementary slackness. Specifically, the (i, j)-th element of the inverse covariance matrix
is deemed to be nonzero if and only if $|W_{ij} - \hat\Sigma_{ij}| = \rho$. We give results for a random problem
(n = 500) and the first real data set in Table 3. For each value of ρ, the first three rows show
the number of nonzeros in the solution and the last three rows show the number of entries that are
nonzero in the solution produced by one of the methods but are zero in the solution produced by
the other method. The sparsity of the ground truth inverse covariance matrix of the synthetic data
is 6.76%.
Table 3: Comparison of sparsity of solutions produced by ALM, PSM and VSM

ρ                100     50      10       5       1     0.5     0.1    0.05    0.01
synthetic problem data (n = 500)
ALM              700   2810   11844   15324   28758   37510   63000   75566  106882
PSM              700   2810   11844   15324   28758   37510   63000   75566  106870
VSM              700   2810   11844   15324   28758   37510   63000   75568  106876
ALM vs PSM         0      0       0       0       0       0       0       2      14
PSM vs VSM         0      0       0       0       0       0       0       0       8
VSM vs ALM         0      0       0       0       0       0       0       2       2
real problem data (1)
ALM              587    587     587     587     587    4617   37613   65959  142053
PSM              587    587     587     587     587    4617   37615   65957  142051
VSM              587    587     587     587     587    4617   37613   65959  142051
ALM vs PSM         0      0       0       0       0       0       0       2       2
PSM vs VSM         0      0       0       0       0       0       2       0       0
VSM vs ALM         0      0       0       0       0       0       0       0       0
From Table 3 we can see that when ρ is relatively large (ρ ≥ 0.5), all three algorithms
produce solutions with exactly the same sparsity patterns; only when ρ is very small are there slight
differences. We note that the ROC curves depicting the trade-off between the number of true positive
elements recovered versus the number of false positive elements, as a function of the regularization
parameter ρ, are also almost identical for the three methods.
Acknowledgements
We would like to thank Professor Kim-Chuan Toh for providing the data set used in Section 4.2. The
research reported here was supported in part by NSF Grants DMS 06-06712 and DMS 10-16571,
ONR Grant N00014-08-1-1118 and DOE Grant DE-FG02-08ER25856.
References
[1] S. Lauritzen. Graphical Models. Oxford University Press, 1996.
[2] B. K. Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24:227-234, 1995.
[3] J.-B. Hiriart-Urruty and C. Lemarechal. Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods. Springer-Verlag, New York, 1993.
[4] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19-35, 2007.
[5] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 2007.
[6] M. Wainwright, P. Ravikumar, and J. Lafferty. High-dimensional graphical model selection using l1-regularized logistic regression. NIPS, 19:1465-1472, 2007.
[7] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485-516, 2008.
[8] L. Li and K.-C. Toh. An inexact interior point method for l1-regularized sparse covariance selection. Preprint, 2010.
[9] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc. B, 58(1):267-288, 1996.
[10] L. Sun, R. Patel, J. Liu, K. Chen, T. Wu, J. Li, E. Reiman, and J. Ye. Mining brain region connectivity for Alzheimer's disease study via sparse inverse covariance estimation. KDD'09, 2009.
[11] A. Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229-251, 2005.
[12] K. Scheinberg and I. Rish. Sinco: a greedy coordinate ascent method for the sparse inverse covariance selection problem. 2009. Preprint available at http://www.optimization-online.org/DB_HTML/2009/07/2359.html.
[13] J. Duchi, S. Gould, and D. Koller. Projected subgradient methods for learning sparse Gaussians. Conference on Uncertainty in Artificial Intelligence (UAI 2008), 2008.
[14] Y. E. Nesterov. Smooth minimization for non-smooth functions. Math. Program. Ser. A, 103:127-152, 2005.
[15] Y. E. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. 2004.
[16] A. d'Aspremont, O. Banerjee, and L. El Ghaoui. First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and its Applications, 30(1):56-66, 2008.
[17] Z. Lu. Smooth optimization approach for sparse covariance selection. SIAM J. Optim., 19(4):1807-1827, 2009.
[18] X. Yuan. Alternating direction methods for sparse covariance selection. 2009. Preprint available at http://www.optimization-online.org/DB_HTML/2009/09/2390.html.
[19] C. Wang, D. Sun, and K.-C. Toh. Solving log-determinant optimization problems by a Newton-CG primal proximal point algorithm. Preprint, 2009.
[20] M. Fortin and R. Glowinski. Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems. North-Holland Pub. Co., 1983.
[21] R. Glowinski and P. Le Tallec. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. SIAM, Philadelphia, Pennsylvania, 1989.
[22] Y. E. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Dokl. Akad. Nauk SSSR, 269:543-547, 1983.
[23] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Submitted to SIAM J. Optim., 2008.
[24] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sciences, 2(1):183-202, 2009.
[25] D. Goldfarb, S. Ma, and K. Scheinberg. Fast alternating linearization methods for minimizing the sum of two convex functions. Technical report, Department of IEOR, Columbia University, 2010.
3,422 | 41 |
NEUROMORPHIC NETWORKS BASED
ON SPARSE OPTICAL ORTHOGONAL CODES

Mario P. Vecchi and Jawad A. Salehi
Bell Communications Research
435 South Street
Morristown, NJ 07960-1961

Abstract

A family of neuromorphic networks specifically designed for communications
and optical signal processing applications is presented. The information is encoded
utilizing sparse Optical Orthogonal Code sequences on the basis of unipolar, binary
(0,1) signals. The generalized synaptic connectivity matrix is also unipolar, and
clipped to binary (0,1) values. In addition to high-capacity associative memory,
the resulting neural networks can be used to implement general functions, such as
code filtering, code mapping, code joining, code shifting and code projecting.
1 Introduction

Synthetic neural nets [1,2] represent an active and growing research field. Fundamental
issues, as well as practical implementations with electronic and optical devices, are being
studied. In addition, several learning algorithms have been studied, for example stochastically adaptive systems [3] based on many-body physics optimization concepts [4,5].

Signal processing in the optical domain has also been an active field of research.
A wide variety of non-linear all-optical devices are being studied, directed towards applications both in optical computing and in optical switching. In particular, the
development of Optical Orthogonal Codes (OOC) [6] is specifically interesting for optical communications applications, as has been demonstrated in the context of Code
Division Multiple Access (CDMA) [7].

In this paper we present a new class of neuromorphic networks, specifically designed
for optical signal processing and communications, that encode the information in sparse
OOC's. In Section 2 we review some basic concepts. The new neuromorphic networks
are defined in Section 3, and their associative memory properties are presented in Section
4. In Section 5 other general network functions are discussed. Concluding remarks are
given in Section 6.
2 Neural Networks and Optical Orthogonal Codes

2.1 Neural Network Model

Neural networks are generally based on multiply-threshold-feedback cycles. In the Hopfield model [2], for instance, a connectivity matrix T stores the M different memory
elements, labeled m, by the sum of outer products,
$$
T_{ij} = \sum_{m=1}^{M} u_i^m u_j^m, \quad i, j = 1, 2, \ldots, N \qquad (1)
$$
where the state vectors u^m represent the memory elements in the bipolar (-1,1) basis.
The diagonal matrix elements in the Hopfield model are set to zero, T_{ii} = 0.

For a typical memory recall cycle, an input vector v^in, which is close to a particular
memory element m = k, multiplies the T matrix, such that the output vector v^out is
given by
$$
v_i^{out} = \sum_{j=1}^{N} T_{ij} v_j^{in}, \quad i = 1, 2, \ldots, N \qquad (2)
$$
and can be seen to reduce to
$$
v_i^{out} \approx (N-1)\, u_i^k + \sqrt{(N-1)(M-1)} \qquad (3)
$$
for large N and in the case of randomly coded memory elements u^m.

In the Hopfield model, each output v_i^{out} is passed through a thresholding stage
around zero. The thresholded output signals are then fed back, and the multiply and
threshold cycle is repeated until a final stable output v^out is obtained. If the input v^in is
sufficiently close to u^k, and the number of state vectors is small (i.e., M ≪ N), the final
output will converge to memory element m = k, that is, v^out → u^k. The associative
memory property of the network is thus established.
2.2 Optical Orthogonal Codes

The OOC sequences have been developed [6,7] for optical CDMA systems. Their properties have been specifically designed for this purpose, based on the following two conditions: each sequence can be easily distinguished from a shifted version of itself, and
each sequence can be easily distinguished from any other shifted or unshifted sequence
in the set. Mathematically, the above two conditions are expressed in terms of auto-
and cross-correlation functions. Because of the non-negative nature of optical signals¹,
OOC are based on unipolar (0,1) signals [7].

In general, a family of OOC is defined by the following parameters:
- F, the length of the code,
- K, the weight of the code, that is, the number of 1's in the sequence,
- λ_a, the auto-correlation value for all possible shifts, other than the zero shift,
- λ_c, the cross-correlation value for all possible shifts, including the zero shift.

For a given code length F, the maximum number of distinct sequences in a family
of OOC depends on the chosen parameters, that is, the weight of the code K and the
allowed overlaps λ_a and λ_c. In this paper we will consider OOC belonging to the minimum
overlap class, λ_a = λ_c = 1.

¹ We refer to optical intensity signals, and not to detection systems sensitive to phase information.
3 Neuromorphic Optical Networks

Our neuromorphic networks are designed to take full advantage of the properties of the
OOC. The connectivity matrix T is defined as a sum of outer products, by analogy with
(1), but with the following important modifications:

1. The memory vectors are defined by the sequences of a given family of OOC, with a
basis given by the unipolar, binary pair (0,1). The dimension of the sparse vectors
is given by the length of the code F, and the maximum number of available items
depends on the chosen family of OOC.
2. All of the matrix elements T_{ij} are clipped to unipolar, binary (0,1) values, resulting
in a sparse and simplified connectivity matrix, without any loss in the functional
properties defined by our neuromorphic networks.
3. The diagonal matrix elements T_{ii} are not set to zero, as they reflect important
information implicit in the OOC sequences.
4. The threshold value is not zero, but is chosen to be equal to K, the weight of
the OOC.
5. The connectivity matrix T is generalized to allow for the possibility of a variety
of outer product options: self-outer products, as in (1), for associative memory,
but also cross-outer products of different forms to implement various other system
functions.

A simplified schematic diagram of a possible optical neuromorphic processor is shown
in Figure 1. This implementation is equivalent to an incoherent optical matrix-vector
multiplier [8], with the addition of nonlinear functions. The input vector is clipped using
an optical hard-limiter with a threshold setting at 1, and then it is anamorphically
imaged onto the connectivity mask for T. In this way, the i-th pixel of the input vector
is imaged onto the i-th column of the T mask. The light passing through the mask is
then anamorphically imaged onto a line of optical threshold elements with a threshold
setting equal to K, such that the j-th row is imaged onto the j-th threshold element.
4 Associative Memory

The associative memory function is defined by a connectivity matrix T^MEM given by:
$$
T_{ij}^{MEM} = g\Big\{ \sum_{m=1}^{M} x_i^m x_j^m \Big\} \qquad (4)
$$
where each memory element x^m corresponds to a given sequence of the OOC family,
with code length F. The matrix elements of T^MEM are all clipped, unipolar values, as
indicated by the function g{·}, such that
$$
g\{\xi\} = \begin{cases} 1 & \text{if } \xi \ge 1 \\ 0 & \text{if } \xi < 1 \end{cases} \qquad (5)
$$
We will now show that an input vector x^k, which corresponds to memory element
m = k, will produce a stable output (equal to the wanted memory vector) in a single
pass of the multiply and threshold process.

The multiplication can be written as:
$$
\hat v_i^{out} = \sum_j T_{ij}^{MEM} x_j^k \qquad (6)
$$
We remember that the non-linear clipping function g{·} is to be applied first to obtain
T^MEM. Hence,
$$
\hat v_i^{out} = \sum_j g\Big\{ x_i^k x_j^k + \sum_{m \ne k} x_i^m x_j^m \Big\}\, x_j^k \qquad (7)
$$
For x_i^k = 0, only the second term in (7) contributes, and the pseudo-orthogonality
properties of the OOC allow us to write:
$$
\hat v_i^{out} \le \lambda_c \qquad (8)
$$
where the cross-correlation value is λ_c < K.

For x_i^k = 1, we again consider the properties of the OOC to obtain for the first term
of (7):
$$
\sum_j g\{x_i^k x_j^k\}\, x_j^k = K \qquad (9)
$$
where K is the weight of the OOC.

Therefore, the result of the multiplication operation given by (7) can be written as:
$$
\hat v_i^{out} = K\, x_i^k + \big[\text{value strictly less than } K\big] \qquad (10)
$$
The thresholding operation follows, around the value K as explained in Section 3.
That is, (10) is thresholded such that:
$$
v_i^{out} = \begin{cases} 1 & \text{if } \hat v_i^{out} \ge K \\ 0 & \text{if } \hat v_i^{out} < K \end{cases} \qquad (11)
$$
hence, the final output at the end of a single pass will be given by v_i^{out} = x_i^k.
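A compact NumPy sketch of the construction (4)-(5) and the single-pass recall (6)-(11) is given below; the three length-21, weight-2 codewords are our illustrative choice (a true OOC family would be designed so that all correlation constraints hold):

```python
import numpy as np

F, K = 21, 2
chips = [(0, 1), (2, 4), (5, 8)]          # chip positions of three codewords
X = np.zeros((len(chips), F), dtype=int)  # rows are the memory vectors x^m
for m, c in enumerate(chips):
    X[m, list(c)] = 1

# Eqs. (4)-(5): clipped sum of outer products, diagonal kept.
T_mem = (X.T @ X >= 1).astype(int)

def recall(v_in):
    # Single multiply-threshold pass with threshold K, Eqs. (6)-(11).
    return (T_mem @ v_in >= K).astype(int)

noisy = X[1].copy()
noisy[12] = 1                              # x^2 plus one spurious '1'
print(bool((recall(noisy) == X[1]).all()))  # True: single-pass recovery
```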
The result just obtained can be extended to demonstrate the single pass convergence
when the input vector is close, but not necessarily equal, to a stored memory element.
We can draw the following conclusions regarding the properties of our neuromorphic
networks based on OOC:

- For any given input vector v^in, the single pass output will correspond to the
memory vector x^m which has the smallest Hamming distance to the input.
- If the input vector v^in is missing a single 1-element from the K 1's of an OOC,
the single pass output will be the null or zero vector.
- If the input vector v^in has the same Hamming distance to two (or more) memory
vectors x^m, the single pass output will be the logical sum of those memory vectors.

The ideas just discussed were tested with a computer simulation. An example of
associative memory is shown in Table 1, corresponding to the OOC class of length
F = 21 and weight K = 2. For this case, the maximum number of independent
sequences is M = 10. The connectivity matrix T^MEM is seen in Table 1, where one can
clearly appreciate the simplifying features of our model, both in terms of the sparsity
and of the unipolar, clipped values of the matrix elements. The computer simulations for
this example are shown in Table 2. The first two input vectors show the error-correcting
memory recovery properties. A third input vector is equally distant to memory vectors
x^3 and x^8, resulting in an output which is the sum (x^3 ⊕ x^8). And finally, input vector
d is closest to a single stored memory vector but has one 1 missing, and the output is the zero vector. The mask
in Figure 1 shows the optical realization of Table 1, where the transparent pixels
correspond to the 1's and the opaque pixels to the 0's of the connectivity matrix T^MEM.

It should be pointed out that the capacity of our network is significant. From the
previous example, the capacity is seen to be ≈ F/2 for single pass memory recovery.
This result compares favorably with the capacity of a Hopfield model [9] of ≈ F/(4 ln F).
5 General Network Functions

Our neuromorphic networks, based on OOC, can be generalized to perform functions
other than associative memory storage by constructing non-symmetrical connectivity
matrices. The single pass convergence of our networks avoids the possibility of limit-cycle oscillations. We can write in general:
$$
T_{ij} = g\Big\{ \sum_{m=1}^{M} y_i^m z_j^m \Big\}, \qquad (12)
$$
where each pair defined by m includes two vectors y^m and z^m, which are not necessarily
equal. The clipping function g{·} insures that all matrix elements are binary (0,1) values.
The possible choice of vector pairs is not completely arbitrary, but there is a wide variety
of functions that can be implemented for each family of OOC. We will now discuss some
of the applications that are of particular interest in optical communication systems.

5.1 Code Filtering (CDMA)

Figure 2 shows an optical CDMA network in a star configuration. M nodes are interconnected with optical fibers to a passive M×M star coupler that broadcasts the optical
signals. At each node there is a data encoder that maps each bit of information to the
OOC sequence corresponding to the user for which the transmission is intended. In
addition, each node has a filter and decoder that recognizes its specific OOC sequence.
The optical transmission rate has been expanded by a factor F corresponding to the
length of the OOC sequence. Within the context of a CDMA communication system [7],
the filter or decoder must perform the function of recognizing a specific OOC sequence
in the presence of other interfering codes sent on the common transmission medium.
We can think, then, of one of our neuromorphic networks as a filter, placed at a given
receiver node, that will recognize the specific code that it was programmed for.
We define for this purpose a connectivity matrix as
$$
T_{ij}^{CDMA} = x_i^k x_j^k; \quad i, j = 1, 2, \ldots, F, \qquad (13)
$$
where only one vector x^k is stored at each node. This symmetric, clipped connectivity
matrix will give an output equal to x^k whenever the input contains this vector, and a
null or zero output vector otherwise. It is clear by comparing (13) with (4) that the
CDMA filtering matrix is equivalent to an associative memory matrix with only one
item imprinted in the memory. Hence the discussion of Section 4 directly applies to the
understanding of the behaviour of T^CDMA.

In order to evaluate the performance of our neuromorphic network as a CDMA
filter, computer simulations were performed. Table 3 presents the T^CDMA matrix for
a particular node defined by x^k of a CDMA system based on the OOC family F = 21,
K = 2. The total number of distinct codes for this OOC family is M = 10; hence there
are 9 additional OOC sequences that interfere with x^k, labeled in Table 3 x^1 to x^9.
The performance was simulated by generating random composite sequences from the
set of codes x^1 to x^9, arbitrarily shifted. All inputs are unipolar and clipped (0,1) signals.
The results presented in Table 4 give examples of our simulation for the T^CDMA matrix
shown in Table 3. The first example input is the (logical) sum of a 1-bit (vector x^k) plus interfering
signals from arbitrarily shifted sequences of x^2, x^3, x^4, x^6 and x^9. The output of the
neuromorphic network is seen to recover accurately the desired vector x^k. A second input
vector contains a 0-bit (null vector) plus the shifted sequences of x^1, x^2, x^3, x^6, x^7
and x^8, and we see that the output correctly recovers a 0-bit.

As discussed in Section 4, our neuromorphic network will always correctly recognize
a 1-bit (vector x^k) presented to its input. On the other hand², there is the possibility of
making an error when a 0-bit is sent, and the interfering signals from other nodes happen
to generate the chip positions of x^k. This case is shown by a third input vector of Table 4,
which contains a 0-bit (null vector) plus shifted sequences of x^2, x^3, x^4, x^5, x^6, x^7 and
x^8 in such a way that the output is erroneously given as a 1-bit. The properties of the
OOC sequences are specifically chosen to minimize these errors [7], and the statistical
results of our simulation are also shown in Table 4. It is seen that, as expected, when
a 1-bit is sent it is always correctly recognized. On the other hand, when 0-bits are
sent, occasional errors occur. Our simulation yields an overall bit error rate (BER) of
BER_sim = 5.88%, as shown in Table 4.
These results can be compared with theoretical calculations [7] which yield an estimate for the BER for the CDMA system described:
$$
BER_{calc} \approx \frac{1}{2} \prod_{i=0}^{K-1} \left[ 1 - q^{M-1-i} \right], \qquad (14)
$$
where q = 1 - K/(2F). For the example of the OOC family F = 21, K = 2, with M = 10,
the above expression yields BER_calc ≈ 5.74%.
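Evaluating (14) numerically for this family reproduces the quoted value (a minimal sketch):

```python
F, K, M = 21, 2, 10
q = 1 - K / (2 * F)              # q as defined after Eq. (14)
ber = 0.5
for i in range(K):
    ber *= 1 - q ** (M - 1 - i)
print(round(100 * ber, 2), "%")  # 5.74 %
```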
² Our channel can be described, then, as a binary Z-channel between each two nodes dynamically
establishing a communication path.

It is seen, therefore, that our neuromorphic network approaches the minimum possible BER for a given family of OOC. In fact, the results obtained using our T^CDMA
are equivalent to a CDMA detection scheme based on "optical AND gates" [10], which corresponds to
the limiting BER determined by the properties of the OOC themselves³.
The optical mask corresponding to the code filtering function is shown in Figure 3.
5.2 Other Functions

As a first example of a non-symmetric T matrix, let us consider the function of mapping
an input code to a corresponding different output code. We define our mapping matrix
as:
$$
T_{ij}^{MAP} = g\Big\{ \sum_m y_i^m z_j^m \Big\}; \quad i, j = 1, 2, \ldots, F, \qquad (15)
$$
where an input vector z^m will produce a different output vector code y^m.

The function of code joining is defined by a transfer function that takes a given
input code and produces at the output a chosen combination of two or more codes.
This function is performed by expressing the general matrix given by (12) as follows:
$$
T_{ij}^{JOIN} = g\Big\{ \sum_m (y_i^m + w_i^m + \cdots)\, z_j^m \Big\}; \quad i, j = 1, 2, \ldots, F, \qquad (16)
$$
where an input vector z^m will result in an output that joins several vector codes (y^m ⊕ w^m ⊕ ...).

The code shifting matrix T^SHIFT allows for the shift of a given code sequence,
such that both input and output correspond to the same code, but shifted with respect
to itself. That is,
$$
T_{ij}^{SHIFT} = g\Big\{ \sum_m x_i(s)^m\, x_j(0)^m \Big\}; \quad i, j = 1, 2, \ldots, F, \qquad (17)
$$
where we have indicated an unshifted code sequence by x(0)^m, and its corresponding
output pair as a shifted version of itself, x(s)^m.

The code projecting function corresponds to processing an input vector that contains
the logical sum of several codes, and projecting at the output a selected single code
sequence. The corresponding matrix T^PROJ is given by:
$$
T_{ij}^{PROJ} = g\Big\{ \sum_m x_i^m (y_j^m + w_j^m + \cdots) \Big\}; \quad i, j = 1, 2, \ldots, F, \qquad (18)
$$
where each input vector (y^m ⊕ w^m ⊕ ...) will project at the output to a single code
x^m. In general, the resulting output code sequence x^m could correspond to a code not
necessarily contained in the input vector.

The performance and error correcting properties of these, and other, general functions follow a similar behaviour as discussed in Section 4.
³ The BERs for the OOC family shown in this example are far too large for a useful CDMA communications system. Our choice was intended to show computer simulated results within a reasonable
computation time.
6 Conclusions

The neuromorphic networks presented, based on sparse Optical Orthogonal Code (OOC)
sequences, have been shown to have a number of attractive properties. The unipolar,
clipped nature of the synaptic connectivity matrix simplifies the implementation. The
single pass convergence further allows for general network functions that are expected
to be of particular interest in communications and signal processing systems.

The coding of the information, based on OOC, has also been shown to result in high
capacity associative memories. The combination of efficient associative memory properties, plus a variety of general network functions, also suggests the possible application
of our neuromorphic networks in the implementation of computational functions based
on optical symbolic substitution.

The family of neuromorphic networks discussed here emphasizes the importance of
understanding the general properties of non-negative systems based on sparse codes [11].
It is hoped that our results will stimulate further work on the fundamental relationship
between coding, or representations, and the information processing properties of neural
nets.

Acknowledgement

We thank J. Y. N. Hui and J. Alspector for many useful discussions, and C. A. Brackett for his support
and encouragement of this research.
References
[1] S. Grossberg. In K. Schmitt, editor, Delay and Functional-Differential Equations and Their Applications, page 121, Academic Press, New York, NY, 1972.
[2] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. USA, 79:2554, 1982.
[3] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cogn. Sci., 9:147, 1985.
[4] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671, 1983.
[5] M. P. Vecchi and S. Kirkpatrick. Global wiring by simulated annealing. IEEE Trans. CAD of Integrated Circuits and Systems, CAD-2:215, 1983.
[6] F. R. K. Chung, J. A. Salehi, and V. K. Wei. Optical orthogonal codes: design, analysis and applications. In IEEE International Symposium on Information Theory, Catalog No. 86CH!374-7, 1986. Accepted for publication in IEEE Trans. on Information Theory.
[7] J. A. Salehi and C. A. Brackett. Fundamental principles of fiber optics code division multiple access. In IEEE International Conference on Communications, 1987.
[8] N. H. Farhat, D. Psaltis, A. Prata, and E. Paek. Optical implementation of the Hopfield model. Appl. Opt., 24:1469, 1985.
[9] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh. The capacity of the Hopfield associative memory. IEEE Trans. on Information Theory, IT-33:461, 1987.
[10] J. A. Salehi. Principles and applications of optical AND gates in fiber optics code division multiple access networks. In preparation, 1987.
[11] G. Palm. Technical comments. Science, 235:1226, 1987.
[Table 1: Associative memory example for the OOC family F = 21, K = 2: the ten code vectors and the resulting clipped connectivity matrix T^MEM. The binary matrix entries were lost in extraction.]
[Table 2: Associative memory recall examples for the connectivity matrix of Table 1: input vectors and their corresponding single-pass outputs. Entries lost in extraction.]
[Table 3: Code filtering (CDMA) connectivity matrix T^CDMA for a node of the OOC family F = 21, K = 2, together with the interfering codes x^1 to x^9. Entries lost in extraction.]
[Table 4: CDMA filtering simulation examples and bit error rate statistics (overall BER_sim = 5.88%). Entries lost in extraction.]
Figure 1:
Schematic diagram of an optical neuromorphic processor using sparse Optical Orthogonal Codes. Notice the absence of feedback because of the single-pass
convergence. The mask shown represents the realisation of the content-addressable memory of Table 1.

Figure 3:
Optical realization of a code filtering (CDMA) mask
of Table 3. The 1's are represented by the transparent pixels, and the 0's by the opaque pixels.

Figure 2:
Schematic diagram of a CDMA communications system over an optical fiber interconnection network, with passive M×M star couplers.
Each node represents one of the M possible distinct
users in the system.
3,423 | 410 | Comparison of three classification techniques,
CART, C4.5 and Multi-Layer Perceptrons
A C Tsoi
R A Pearson
Department of Electrical EngineeringDepartment of Computer Science
University of Queensland
Aust Defence Force Academy
St Lucia, Queensland 4072
Campbell, ACT 2600
Australia
Australia
Abstract
In this paper, after some introductory remarks into the classification problem as considered in various research communities, and some discussions
concerning some of the reasons for ascertaining the performances of the
three chosen algorithms, viz., CART (Classification and Regression Tree),
C4.5 (one of the more recent versions of a popular induction tree technique known as ID3), and a multi-layer perceptron (MLP), it is proposed
to compare the performances of these algorithms under two criteria: classification and generalisation. It is found that, in general, the MLP has better
classification and generalisation accuracies compared with the other two
algorithms.
1 Introduction

Classification of data into categories has been pursued by a number of research
communities, viz., applied statistics, knowledge acquisition, and neural networks.

In applied statistics, there are a number of techniques, e.g., clustering algorithms
(see e.g., Hartigan) and CART (Classification and Regression Trees, see e.g., Breiman
et al). Clustering algorithms are used when the underlying data naturally fall into a
number of groups, the distances among groups being measured by various metrics [Hartigan]. CART [Breiman et al] has been very popular among applied statisticians.
It assumes that the underlying data can be separated into categories, the decision
boundaries can either be parallel to the axis or they can be a linear combination
of these axes.¹ Under certain assumptions on the input data and their associated
¹ In CART and C4.5, the axes are the same as the input features.
output categories, its properties can be proved rigorously [Breiman et al]. The way
in which CART organises its data set is quite sophisticated. For example, it grows
a number of decision trees by a cross validation method.
Knowledge acquisition is an important topic in expert systems studies, see e.g.,
Charniak, McDermott. In this case, one is presented with a subset of input output
examples drawn from the set of all possible input output examples exhibited by the
underlying system. The problem is how to "distill" a set of rules describing the set
of input output examples. The rules are often expressed in the form of "if statement
1, then statement 2, else statement 3". Once this set of rules is obtained, it can
be used in a knowledge base for inference or for consulting purposes. It is trivial
to observe that the rules can be represented in the form of a binary tree structure.
In the process of building this binary tree, the knowledge acquisition system must
learn about the set of input output examples. Often this problem is pursued in the
machine learning community, see e.g., Michalski et al.
One of the most popular induction tree algorithms is known as ID3, or its later
variants, known as C4 (see e.g., Quinlan, Utgoff). There has not been any explicit
mention of the underlying assumptions on the data. However, it can be postulated
that for an induction tree technique to work efficiently, there must be some underlying assumptions on the data set considered. By analogy with CART, it can be
observed that an important underlying assumption must be that the data can be
divided into categories, the decision boundaries must be parallel to the axes (i.e., it
does not find a linear combination of the underlying axes to form a possible decision
boundary). In contrast to CART, and similar techniques, it does not yet have a
rigorous theoretical basis. Its learning algorithm, and the way in which it organises
the data set are somewhat different from CART.
Recently, there has been considerable activity in the study of yet another classification
method, known generally as an artificial neural network (ANN) approach (see e.g.,
Hecht-Nielsen). In this approach, the idea is to use a system consisting of artificial neurons with very simple internal dynamics, interconnected to each other for
modelling a given set of input output examples. In this approach, one selects an
architecture of interconnection of artificial neurons, and a learning algorithm for
finding the unknown parameters in the architecture. A particular popular ANN architecture is known as a multi-layer perceptron (MLP). In this architecture, signal
travels in only one direction, i.e., there is no feedback from the output to the input.
A simple version of this architecture, consisting of only input and output layers
of neurons was popularised by Rosenblatt in the 1950's and 1960's. An improved
version incorporating possibly more than one layer of hidden layer neurons has been
used in the more recent past. A learning algorithm for finding the set of unknown
parameters in this architecture while minimising a least square criterion is known
as a back propagation algorithm. (see e.g., Rumelhart, McClelland).
There have been much analysis recently in understanding why a MLP can be used
in classifying given input output examples, and what underlying assumptions are
required (see e.g., Cybenko, Hornik et al). It can be proved that the MLP can
be used to approximate any given nonlinear input output mapping given certain
not too restrictive assumptions on the mapping, and the underlying input output
variables.
Given that the three methods mentioned above, viz., CART, C4.5 (the latest version
of the C4 Induction Tree methodology), and MLP, all enjoy popularity in their
respective research communities, and that they all perform classification based on
a given set of input output examples, a natural question to ask is: how do they
perform as compared with one another.
There might be some objections to why a comparison among these algorithms is
necessary, since each is designed to operate under some predetermined conditions.
Secondly, even if it is shown that a particular algorithm performs better for a set of
particular examples, there is no guarantee that the algorithm will perform better
under a different set of circumstances. Thus, this may throw some doubt on the
desirability of making a comparison among these algorithms.
As indicated above, each algorithm has some underlying assumptions on the construction of a data model, whether these assumptions are made explicit or not. In
a practical problem, e.g., power system forecasting [Atlas et al] it is not possible
to determine the underlying assumptions in the data. But on an artificially generated example, it is possible to constrain the data so that they would have the
desirable characteristics. From this, it is possible to at least make some qualitative
statements concerning the algorithms. These qualitative statements may guide a
practitioner to watch out for possible pitfalls in applying a particular algorithm to
practical problems. Hence, it is worthwhile to carry out comparison studies.
The comparison question is not new. In fact there are already a number of studies carried out to compare the performances of some of or all three algorithms
mentioned.² For example, Atlas et al. compared the performances of CART and
MLP. In addition they have considered the performances of these two algorithms to
a practical problem, viz., the power system forecasting. Dietterich et al compared
the performances of ID3 and MLP, and have applied them to the Text to Speech
mapping problem. In general, their conclusions are that the MLP is more accurate
in performing generalisation on unseen examples, while the ID3 or CART is much
faster in performing the classification task.
In this paper, we will consider the performances of all three algorithms, viz., CART,
C4.5 and MLP on two criteria:
? Classification capabilities
? Generalisation capabilities
In order to ascertain how these algorithms will perform, we have chosen to study
their performances using a closed set of input output examples. In this aspect, we
have chosen a version of the Penzias example, first considered by Denker et al. This
class of problems has been shown to require at least one hidden layer in a MLP
architecture, indicating that the relationship between the input and output is nonlinear. Secondly, the problem complexity depends on the number of input neurons
(in Cart and C4.5, input features). Hence it is possible to test the algorithms using
a progressively complex set of examples.
We have chosen to compare the algorithms under the two criteria because of the
² Both Atlas et al. and Dietterich et al. were brought to our attention during the conference. Hence some of their conclusions were only communicated to us at that time.
fact that some of them, at least, in the case of CART, were designed for classification purposes. It was not originally intended for generalisation purposes. By
generalisation, we mean that the trained system is used to predict the categories of
unseen examples when only the input variables are given. The predicted categories
are then compared with the true categories to ascertain how well the trained system
has performed.
The separate comparison is necessary because of the fact that classification and
generalisation are rather different. In classification studies, the main purpose is to
train a system to classify the given set of input output examples. The characteristics
are: good model of the data; good accuracy in classifying the given set of examples.
In generalisation, the main goal is to provide a good accuracy of prediction of output
categories on the set of unseen examples. It does not matter much if the results of
applying the trained data model to the training data set are less accurate.
An important point to note is that all the algorithms have a number of parameters
or procedures which allow them to perform better. For example, it is possible to
vary the a priori assumption on the occurrence of different output categories in
CART, while to perform a similar task in C4.5 or MLP is rather more difficult. It
is possible to train the MLP by ever increasing iterations until the error is small,
given sufficient number of hidden layer neurons. On the other hand, in C4.5, or
CART, the number of iterations is not an externally adjustable parameter.
In order to avoid pitfalls like these, as well as to avoid the criticism of favoring
one algorithm against another, the results presented here have not been consciously
tuned to give the best performance. For example, even though from observations,
we know that the distribution of different output categories is uneven, we have
not made any adjustments to the a priori probabilities in running CART. We will
assume that the output categories occur with equal prior probabilities. We have
not tuned the number of hidden layer neurons in the MLP, except we have taken a
particular number which has been used by others. We have not tuned the learning
rate, nor the momentum rate in the MLP except just a nominal default value which
appears to work for other examples. We have not tuned the C4.5 nor CART apart
from using the default values. Hopefully by doing this, the comparison will appear
fairer.
The structure of the paper is as follows: in section 2, we will describe the classification results, while in section 3 we will present generalisation results.
2 Comparison of classification performances
Before we present the results of comparing the performances of the algorithms, we
will give a brief description of the testing example used. This example is known as a
clump example in Denker et al, while in Maxwell et al it is referred to as the contiguity
example (see [Webb, Lowe]).
There are N input features, each feature can take only the values of 0 or 1. Thus
there are altogether 2N examples. The output class of a particular input feature
vector is the number of clumps involving l's in the input feature vector. Thus, for
example, if the input feature vector is 00110100, then this is in class 2 as there are
two distinct clumps of 1's in the input features. Hence it is possible to generate
the closed set of all input output examples given a particular value of N. For
convenience, we will call this an Nth order Penzias example. In our case considered
here, we have used N = 8, i.e., there are 256 examples in the entire set. The input
features are binary equivalent of their ordinal numbers. For example, example 10
is 00001010. This allows us to denote any sample within the set more conveniently.
The distribution of the output classes is as follows:

  class          1    2    3    4
  total number   37   126  84   9
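Since these counts are easy to verify by brute force, the following short Python sketch (ours, not from the paper) enumerates all 256 input vectors; the all-zero vector appears to be folded into class 1, which explains the count of 37 (36 single-clump vectors plus one).

```python
# A minimal sketch (not code from the paper) enumerating the 8th-order
# Penzias ("clump") example and checking the class counts quoted above.
from itertools import groupby
from collections import Counter

N = 8

def clump_class(v: int) -> int:
    bits = format(v, f"0{N}b")
    # count maximal runs of consecutive 1's in the feature vector
    clumps = sum(1 for bit, _ in groupby(bits) if bit == "1")
    return max(clumps, 1)  # assumption: the all-zero vector joins class 1

counts = Counter(clump_class(v) for v in range(2 ** N))
print(sorted(counts.items()))  # [(1, 37), (2, 126), (3, 84), (4, 9)]
```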
For classification purposes, we use all 256 examples as both the training and testing
data sets. The following table summarises the classification results.

  name   # of errors   accur %
  cart   96            0.625
  c4.5   105           0.59
  mlp1   117           0.54
  mlp2   47            0.82
where mlp1 and mlp2 are the values related to the MLP when it has run for 10000
iterations and 100000 iterations respectively. We have run the MLP in the following fashion: we run it 10000 times and then in steps of 10000 iterations but
at the beginning of each 10000 iterations it is run with a different initial parameter estimate. In this way, we can ensure that the MLP will not fall into a local
minimum. Secondly, we can observe how the MLP accuracies will improve with
increasing number of iterations. We found that in general, the MLP converges in
about 20000 iterations; after that, additional iterations do not improve the results by a significant amount. In addition, because of the way in which we run the
experiment, the convergence would be closer to the average convergence rather than
the convergence for a particular initial condition.
The parameter values used in running the experiments are as follows: In the MLP,
both the learning rate and the momentum are set at 0.1. The architecture used
is: 8 input neurons, 5 hidden layer neurons, and 4 output neurons. In CART, the
prior probability is set to be equi-probable. The pruning is performed when the
probability of the leaf node equals 0.5. In C4.5, all the default values are used.
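As an illustration of the setup just described, the sketch below (ours, not the authors' code) implements one back-propagation step with momentum for the quoted 8-5-4 architecture; the weight initialization and the target encoding are assumptions.

```python
# Sketch (ours) of the quoted MLP configuration: 8 inputs, 5 hidden units,
# 4 outputs, learning rate and momentum both 0.1.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, (5, 9))   # 8 inputs + bias -> 5 hidden units
W2 = rng.normal(0.0, 0.5, (4, 6))   # 5 hidden + bias -> 4 output units
V1, V2 = np.zeros_like(W1), np.zeros_like(W2)
LR, MOM = 0.1, 0.1                  # values quoted in the text

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target):
    # one back-propagation update on the squared error, with momentum
    global W1, W2, V1, V2
    xb = np.append(x, 1.0)
    h = sigmoid(W1 @ xb)
    hb = np.append(h, 1.0)
    y = sigmoid(W2 @ hb)
    dy = (y - target) * y * (1.0 - y)
    dh = (W2[:, :5].T @ dy) * h * (1.0 - h)
    V2 = MOM * V2 - LR * np.outer(dy, hb)
    V1 = MOM * V1 - LR * np.outer(dh, xb)
    W2 += V2
    W1 += V1
```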
We have also examined the ways in which each algorithm predicts the output categories. We found that none of the algorithms ever predict an output category of
4. This is interesting in that the output category 4 occurs only 9 times out of a
total possible of 256. Thus each algorithm, even though it mayor may not be able
to adjust the prior probability of the output categories, has made an implicit assumption of equal prior probability. This leads to the non occurrence of prediction
of category 4, as it is the least frequently occurring one.
Secondly, all algorithms have a default prediction. For example, in CART, the
default is class 2, being the most frequently occurred output category in the training
examples, while in the case of C4.5, the default is determined by the algorithm. On
the other hand, in the cases of CART, or MLP, it is not clear how the default cases
are determined.
Thirdly, the algorithms make mistaken predictions at different places. For example,
for sample 1, C4.5 makes the wrong prediction of category 3 while MLP makes the
wrong prediction of 2, and CART makes the correct prediction. For sample 9, both
CART and C4.5 make a wrong prediction, while MLP makes the correct prediction.
3 Comparison of generalisation performances
We have used the same set of input output examples generated by an 8th order
Penzias example. For testing the generalisation capabilities, we have used the first
200 examples as the training vector set, and the rest of the vectors in the testing
data set.
The results are summarised in the following table:
          training               testing
  name    # of errors  accur %   # of errors  accur %
  cart    84           58        34           39.3
  c4.5    97           51.5      25           55.4
  mlp1    100          50        28           50
  mlp2    50           75        25           55.4
It is noted that the generalisation accuracy of the MLP is better than CART, and
is comparable to C4.5.
We have also examined closely the mistakes made by the algorithms as well as the
default predictions. In this case, the comments made in section 2 also appear to be
true.
4
Concl usions
In this paper, we considered three classification algorithms, viz., CART, C4.5, and
MLP. We compared their performance both in terms of classification, and generalisation on one example, an 8th order generalised Penzias example. It is found that
the MLP once it is converged, in general, has a better classification and generalisation accuracies compared with CART, or C4.5. On the other hand it is also noted
that the prediction errors made by each algorithm are different. This indicates that
there may be a possibility of combining these algorithms in such a way that their
prediction accuracies could be improved. This is presented as a challenge for future
research.
References
J. Hartigan. (1974) Clustering Algorithms. J. Wiley, New York.
L. Breiman, J.H. Friedman, R.A. Olshen, J. Stone. (1984) Classification and
Regression Trees. Wadsworth and Brooks, Monterey, Calif.
E. Charniak, D. McDermott. (1985) Introduction to Artificial Intelligence. Addison-Wesley, Reading, Mass.
R. Michalski, J.G. Carbonell, T. Mitchell. (1983) Machine Learning: An Artificial
Intelligence Approach. Tioga, Palo Alto, Calif.
J.R. Quinlan. (1983) Learning efficient classification procedures and their application to Chess End Games. In R. Michalski et al (ed.), Machine Learning: An
Artificial Intelligence Approach. Tioga, Palo Alto, Calif.
J. R. Quinlan. (1986) Induction of Decision Trees. Machine Learning, 1, 81-106.
P. Utgoff. Incremental Induction of Decision Trees. Machine Learning, 4, 161-186.
R. Hecht-Nielsen. (1990) Neurocomputing. Addison-Wesley, New York.
F. Rosenblatt. (1962) Principles of Neurodynamics. Spartan Books, Washington,
DC.
D. Rumelhart, J. McClelland. (1987) Parallel Distributed Processing: Exploration
in the Microstructure of Cognition Volume 1. MIT Press: Bradford Books.
G. Cybenko. (1989) Approximation by superpositions of sigmoidal function. Mathematics of Control, Signal, and Systems, 2:4.
K. Hornik, M. Stinchcombe, H. White. (1989) Multi-layer feedforward networks
are universal approximators. Neural Networks, 2:5, 359-366.
L. Atlas, R. Cole, Y. Muthusamy, A. Lippman, J. Connor, D. Park, M. El-Sharkawi, R. Marks II. (1990). A Performance Comparison of Trained Multilayer
Perceptrons and Trained Classification Trees. Proc IEEE, 78:10, 1614-1619.
T. Dietterich, H. Hild, G. Bakiri. (1990) "A Comparison of ID3 and Backpropagation for English Text-to-Speech Mapping", Preprint.
J. Denker, et al. (1987) Large automatic learning, rule extraction, and generalisation. Complex Systems, 3 877-922.
T. Maxwell, L. Giles, Y.C. Lee. (1987) Generalisation in neural networks, the
contiguity problem. Proc IEEE 1st Int Conf on Neural Networks, San Diego, Calif.
A.R. Webb, D. Lowe. (1990) The Optimised internal representation of multilayered
classifier networks performs nonlinear discriminant analysis. Neural Networks 3:4,
367-376.
| 410 |@word version:5 fairer:1 queensland:2 mention:1 carry:1 initial:2 charniak:2 tuned:4 past:1 comparing:1 yet:2 must:4 predetermined:1 designed:2 atlas:4 progressively:1 pursued:2 leaf:1 intelligence:3 beginning:1 consulting:1 equi:1 node:1 sigmoidal:1 popularised:1 qualitative:2 introductory:1 nor:2 frequently:1 multi:4 pitfall:2 increasing:2 underlying:11 alto:2 mass:1 what:1 contiguity:2 finding:2 guarantee:1 act:1 wrong:3 classifier:1 control:1 enjoy:1 appear:2 before:1 generalised:1 local:1 mistake:1 optimised:1 might:1 examined:2 clump:3 practical:3 tsoi:4 testing:5 backpropagation:1 lippman:1 procedure:2 universal:1 convenience:1 applying:2 equivalent:1 latest:1 attention:1 rule:5 construction:1 diego:1 nominal:1 rumelhart:2 predicts:1 observed:1 preprint:1 electrical:1 mentioned:2 utgoff:2 complexity:1 rigorously:1 dynamic:1 trained:5 basis:1 various:2 represented:1 train:2 separated:1 distinct:1 describe:1 artificial:6 spartan:1 pearson:4 quite:1 interconnection:1 statistic:2 unseen:3 id3:4 michalski:3 interconnected:1 frequent:1 combining:1 tioga:2 academy:1 description:1 convergence:3 accur:3 incremental:1 converges:1 measured:1 throw:1 predicted:1 direction:1 closely:1 correct:2 exploration:1 australia:2 require:1 microstructure:1 organises:2 cybenko:2 probable:1 secondly:4 communicat:1 hild:1 considered:6 mapping:4 predict:2 cognition:1 vary:1 purpose:5 proc:2 travel:1 superposition:1 palo:2 cole:1 brought:1 mit:1 defence:1 desirability:1 rather:3 avoid:2 breiman:4 ax:4 viz:6 modelling:1 indicates:1 contrast:1 rigorous:1 criticism:1 inference:1 entire:1 hidden:5 favoring:1 selects:1 classification:25 among:4 priori:2 wadsworth:1 equal:3 once:2 extraction:1 washington:1 park:1 future:1 others:1 neurocomputing:1 intended:1 consisting:2 statistician:1 friedman:1 mlp:27 possibility:1 adjust:1 accurate:2 closer:1 necessary:2 respective:1 tree:13 calif:4 theoretical:1 classify:1 giles:1 distill:1 subset:1 too:1 st:2 lee:1 possibly:1 conf:1 book:2 expert:1 doubt:1 int:1 matter:1 postulated:1 depends:1 ad:1 later:1 performed:2 lowe:2 closed:2 doing:1 parallel:3 capability:2 square:1 accuracy:7 characteristic:2 efficiently:1 consciously:1 none:1 converged:1 ed:2 against:1 acquisition:3 naturally:1 associated:1 proved:2 popular:4 ask:1 mitchell:1 knowledge:4 sophisticated:1 campbell:1 back:1 appears:1 wesley:2 maxwell:2 originally:1 methodology:1 improved:2 though:2 just:1 implicit:1 until:1 hand:3 nonlinear:3 hopefully:1 propagation:1 indicated:1 grows:1 name:2 dietterich:2 building:1 true:2 hence:4 white:1 during:1 game:1 noted:2 criterion:3 stone:1 performs:2 recently:2 volume:1 thirdly:1 occurred:2 significant:1 connor:1 ai:1 mistaken:1 automatic:1 mathematics:1 concl:1 base:1 recent:2 apart:1 certain:2 binary:3 approximators:1 mcdermott:2 minimum:1 somewhat:1 determine:1 signal:2 ii:1 desirable:1 faster:1 cross:1 minimising:1 lin:1 divided:1 concerning:2 hecht:2 prediction:12 variant:1 regression:3 involving:1 multilayer:1 circumstance:1 metric:1 mayor:1 iteration:9 nielson:2 addition:2 objection:1 else:1 operate:1 rest:1 exhibited:1 comment:1 cart:29 practitioner:1 call:1 feedforward:1 muthusamy:1 usions:1 architecture:7 idea:1 classficiation:1 whether:1 forecasting:2 speech:2 york:2 remark:1 generally:1 monterey:1 clear:1 amount:1 category:18 mcclelland:2 generate:1 popularity:1 rosenblatt:2 summarised:1 group:2 drawn:1 hartigan:3 ascertaining:1 run:5 place:1 decision:6 comparable:1 layer:11 activity:1 occur:1 constrain:1 lucia:1 aspect:1 performing:2 department:1 
combination:2 ascertain:2 making:1 chess:1 refered:1 taken:1 describing:1 know:1 ordinal:1 addison:1 end:1 denker:3 observe:2 worthwhile:1 occurrence:2 altogether:1 assumes:1 clustering:3 running:2 ensure:1 quinlan:3 restrictive:1 bakiri:1 summarises:1 question:2 already:1 occurs:1 distance:1 separate:1 carbonell:1 topic:1 discriminant:1 trivial:1 reason:1 induction:6 relationship:1 difficult:1 olshen:1 webb:2 statement:5 unknown:2 perform:6 adjustable:1 neuron:10 observation:1 ever:2 dc:1 community:4 required:1 c4:20 brook:1 able:1 reading:1 challenge:1 stinchcombe:1 power:2 natural:1 force:1 nth:1 improve:2 brief:1 axis:1 carried:1 text:2 prior:4 understanding:1 interesting:1 analogy:1 validation:1 sufficient:1 principle:1 classifying:2 english:1 guide:1 allow:1 perceptron:2 fall:2 distributed:1 boundary:3 feedback:1 default:8 made:6 san:1 approximate:1 pruning:1 why:2 table:2 neurodynamics:1 learn:1 hornik:2 complex:2 artificially:1 main:2 multilayered:1 fashion:1 wiley:1 momentum:2 explicit:2 externally:1 incorporating:1 aust:1 conveniently:1 expressed:1 adjustment:1 watch:1 goal:1 ann:2 considerable:1 generalisation:16 except:2 determined:2 total:2 bradford:1 perceptrons:2 indicating:1 uneven:1 internal:2 mark:1 |
3,424 | 4,100 | Global seismic monitoring as probabilistic inference
Nimar S. Arora
Department of Computer Science
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Stuart Russell
Department of Computer Science
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Paul Kidwell
Lawrence Livermore National Lab
Livermore, CA 94550
[email protected]
Erik Sudderth
Department of Computer Science
Brown University
Providence, RI 02912
[email protected]
Abstract
The International Monitoring System (IMS) is a global network of sensors whose
purpose is to identify potential violations of the Comprehensive Nuclear-Test-Ban
Treaty (CTBT), primarily through detection and localization of seismic events.
We report on the first stage of a project to improve on the current automated
software system with a Bayesian inference system that computes the most likely
global event history given the record of local sensor data. The new system, VISA
(Vertically Integrated Seismological Analysis), is based on empirically calibrated,
generative models of event occurrence, signal propagation, and signal detection.
VISA exhibits significantly improved precision and recall compared to the current
operational system and is able to detect events that are missed even by the human
analysts who post-process the IMS output.
1 Introduction
The CTBT aims to prevent the proliferation and the advancement of nuclear weapon technology
by banning all nuclear explosions. A global network of seismic, radionuclide, hydroacoustic, and
infrasound sensors, the IMS, has been established to enforce the treaty. The IMS is the world's
primary global-scale, continuous, real-time system for seismic event monitoring. Data from the IMS
sensors are transmitted via satellite in real time to the International Data Center (IDC) in Vienna,
where automatic event-bulletins are issued at predefined latency. Perfect performance remains well
beyond the reach of current technology: the IDC's automated system, a highly complex and well-tuned piece of software, misses nearly one third of all seismic events in the magnitude range of
interest, and about half of the reported events are spurious. A large team of expert analysts postprocesses the automatic bulletins to improve their accuracy to acceptable levels.
Like most current systems, the IDC operates by detection of arriving signals at each sensor station
(the station processing stage) and then grouping multiple detections together to form events (the
network processing stage).1 The time and location of each event are found by various search methods
including grid search [2], the double-difference algorithm [3], and the intersection method [4]. In
the words of [5], "Seismic event location is - at its core - a minimization of the difference between
observed and predicted arrival times." Although the mathematics of seismic event detection and
¹ Network processing is thus a data association problem similar to those arising in multitarget tracking [1].
localization has been studied for almost 100 years [6], the IDC results indicate that the problem is
far from trivial.
There are three primary sources of difficulty: 1) the travel time between any two points on the earth
and the attenuation of various frequencies and wave types are not known accurately; 2) each detector
is subject to local noise that may mask true signals and cause false detections (as much as 90% of
all detections are false); and 3) there are many thousands of detections per day, so the combinatorial
problem of proposing and comparing possible events (subsets of detections) is daunting. These considerations suggest that an approach based on probabilistic inference and combination of evidence
might be effective, and this paper demonstrates that this is in fact the case. For example, such an
approach automatically takes into account non-detections as negative evidence for a hypothesized
event, something that classical methods cannot do.
In simple terms, let X be a random variable ranging over all possible collections of events, with
each event defined by time, location, magnitude, and type (natural or man-made). Let Y range
over all possible waveform signal recordings at all detection stations. Then $P_\theta(X)$ describes a
parameterized generative prior over events, and $P_\phi(Y \mid X)$ describes how the signal is propagated
and measured (including travel time, selective absorption and scattering, noise, artifacts, sensor
bias, sensor failures, etc.). Given observed recordings Y = y, we are interested in the posterior
P(X | Y = y), and perhaps in the value of X that maximizes it - i.e., the most likely explanation
for all the sensor readings. We also learn the model parameters $\theta$ and $\phi$ from historical data.
Our overall project, VISA (Vertically Integrated Seismic Analysis), is divided into two stages. The
first stage, NET-VISA, is the subject of the current paper. As the name suggests, NET-VISA deals
only with network processing and relies upon the IDC's pre-existing signal detection algorithms.
(The second stage, SIG-VISA, will incorporate a signal waveform model and thereby subsume the
detection function.) NET-VISA computes a single most-likely explanation: a set of hypothesized
events with their associated detections, marking all other detections as noise. This input-output
specification, while not fully Bayesian in spirit, enables direct comparison to the current automated
system bulletin, SEL3. Using the final expert-generated bulletin, LEB, as ground truth, we compared
the two systems on 7 days of held-out data. NET-VISA has 16% more recall at the same precision
as SEL3, and 25% more precision at the same recall as SEL3. Furthermore, taking data from the
more comprehensive NEIC (National Event Information Center) database as ground truth for the
continental United States, we find that NET-VISA is able to detect events in the IMS data that are
not in the LEB report produced by the IDC's expert analysts; thus, NET-VISA's true performance may
be higher than the LEB-based calculation would suggest.
The rest of the paper is structured as follows. Section 2 describes the problem in detail and covers some elementary seismology. Sections 3 and 4 describe the probability model and inference
algorithm. Section 5 presents the results of our evaluation, and Section 6 concludes.
2 The Seismic Association and Localization Problem
Seismic events are disturbances in the earth's crust. Our work is concerned primarily with earthquakes and explosions (nuclear and conventional), but other types of events - waves breaking, trees
falling, ice falling, etc. - may generate seismic waves too. All such waves occur in a variety of types
[7]: body waves that travel through the earth's interior and surface waves that travel on the surface.
There are two types of body waves - compression or P waves and shear or S waves. There are also
two types of surface waves - Love and Rayleigh. Further, body waves may be reflected off different
layers of the earth's crust and these are labeled distinctly by seismologists. Each particular wave
type generated by a given event is called a phase. These waves are picked up in seismic stations
as ground vibrations. Typically, seismic stations have either a single 3-axis detector or an array
of vertical-axis detectors spread over a scale of many kilometers. Most detectors are sensitive to
nanometer-scale displacements, and so are quite susceptible to noise.
Raw seismometer measurements are run through standard signal processing software that filters out
non-seismic frequencies and computes short-term and long-term averages of the signal amplitude.
When the ratio of these averages exceeds a fixed threshold, a detection is announced. Various
parameters of the detection are measured - onset time, azimuth (direction from the station to the
source of the wave), slowness (related to the angle of declination of the signal path), amplitude, etc.
Based on these parameters, a phase label may be assigned to the detection based on the standard
IASPEI phase catalog [7]. All of these detection attributes may be erroneous.
The problem that we attempt to solve in this paper is to take a continuous stream of detections
(with onset time, azimuth, slowness, amplitude, and phase label) from the roughly 120 IMS seismic
stations as input and produce a continuous stream of events and associations between events and
detections. The parameters of an event are its longitude, latitude, depth, time, and magnitude (mb
or body-wave magnitude). A 3-month dataset (660 GB) has been made available by the IDC for the
purposes of this research. We have divided the dataset into 7 days of validation, 7 days of test, and
the rest as training data. We compute the accuracy of an event history hypothesis by comparison to a
chosen ground-truth history. A bipartite graph is created between predicted and true events. An edge
is added between a predicted and a true event that are at most 5 degrees in distance² and 50 seconds
in time apart. The weight of the edge is the distance between the two events. Finally, a min-weight
max-cardinality matching is computed on the graph. We report 3 quantities from this matching?
precision (percentage of predicted events that are matched), recall (percentage of true events that
are matched), and average error (average distance in kilometers between matched events).
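The matching-based evaluation above is straightforward to implement; the sketch below is our illustration (not IDC code), using a large-penalty assignment to obtain a min-weight max-cardinality matching, and it reports the error in degrees rather than kilometers for brevity.

```python
# Sketch (ours) of the evaluation just described: match predicted to true
# events within 5 degrees and 50 seconds, then report precision/recall/error.
import numpy as np
from scipy.optimize import linear_sum_assignment

def dist_deg(a, b):
    # great-circle distance in degrees between (lon, lat, time) tuples
    lon1, lat1, lon2, lat2 = map(np.radians, (a[0], a[1], b[0], b[1]))
    cosd = (np.sin(lat1) * np.sin(lat2)
            + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))

def evaluate(pred, true, max_deg=5.0, max_sec=50.0):
    BIG = 1e9  # penalty that forbids out-of-tolerance pairings
    C = np.full((len(pred), len(true)), BIG)
    for i, p in enumerate(pred):
        for j, t in enumerate(true):
            d = dist_deg(p, t)
            if d <= max_deg and abs(p[2] - t[2]) <= max_sec:
                C[i, j] = d
    rows, cols = linear_sum_assignment(C)
    matched = [(i, j) for i, j in zip(rows, cols) if C[i, j] < BIG]
    precision = 100.0 * len(matched) / max(len(pred), 1)
    recall = 100.0 * len(matched) / max(len(true), 1)
    avg_err = float(np.mean([C[i, j] for i, j in matched])) if matched else 0.0
    return precision, recall, avg_err  # error in degrees, not km
```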
3 Generative Probabilistic Model
Our generative model for seismic events and detections follows along the lines of the aircraft detection model in [8, Figure 3]. In our model, there is an unknown number of seismic events with
unknown parameters (location, time, etc.). These events produce 14 different types of seismic waves
or phases. A phase from an event may or may not be detected by a station. If a phase is detected at
a station, a corresponding detection is generated. However, the parameters of the detection may be
imprecise. Additionally, an unknown number of noise detections are generated at each station. For
NET-VISA, the evidence Y = y consists only of each station's set of detections and their parameters.
3.1 Events
The events are generated by a time-homogeneous Poisson process. If e is the set of events (of
size |e|), $\lambda_e$ is the rate of event generation, and T is the time period under consideration, we have
$$P_\theta(|e|) = \frac{(\lambda_e \cdot T)^{|e|} \exp(-\lambda_e \cdot T)}{|e|!}. \qquad (1)$$
The longitude and latitude of the i-th event, $e^i_l$, are drawn from an event location density $p_l(e_l)$ on the surface of the earth. The depth of the event, $e^i_d$, is uniformly distributed up to a maximum depth D (700 km in our experiments). Similarly, the time of the event $e^i_t$ is uniformly distributed between 0 and T. The magnitude of the event, $e^i_m$, is drawn from what seismologists refer to as the Gutenberg-Richter distribution, which is in fact an exponential distribution with rate $\lambda_m$:
$$P_\theta(e^i) = p_l(e^i_l)\,\frac{1}{D}\,\frac{1}{T}\,\lambda_m \exp(-\lambda_m e^i_m). \qquad (2)$$
Since all the events are exchangeable, we have
$$P_\theta(e) = P_\theta(|e|)\cdot|e|!\cdot\prod_{i=1}^{|e|} P_\theta(e^i) = \exp(-\lambda_e T)\,\prod_{i=1}^{|e|} p_l(e^i_l)\,\frac{1}{D}\,\lambda_e\,\lambda_m \exp(-\lambda_m e^i_m). \qquad (3)$$
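For concreteness, a minimal sketch of forward-sampling from this prior is given below; it is our illustration, the rate/magnitude/window values are placeholders rather than calibrated IMS parameters, and sample_location() is a uniform stand-in for the kernel density of Eq. (4).

```python
# Sketch (ours) of forward-sampling an event history from Eqs. (1)-(3).
import numpy as np

rng = np.random.default_rng(0)

def sample_location():
    # placeholder: uniform on the sphere; the paper mixes a kernel density
    # around historical seismicity with a small uniform component
    lon = rng.uniform(-180.0, 180.0)
    lat = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0)))
    return lon, lat

def sample_events(lambda_e=0.03, lambda_m=2.0, D=700.0, T=3600.0):
    n = rng.poisson(lambda_e * T)                 # Eq. (1)
    events = []
    for _ in range(n):
        lon, lat = sample_location()              # draw from p_l(e_l)
        depth = rng.uniform(0.0, D)               # uniform depth up to D km
        time = rng.uniform(0.0, T)                # uniform time in window
        mag = rng.exponential(1.0 / lambda_m)     # Gutenberg-Richter, Eq. (2)
        events.append({"lon": lon, "lat": lat, "depth": depth,
                       "time": time, "mb": mag})
    return events
```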
Maximum likelihood estimates of $\lambda_e$ and $\lambda_m$ may be easily determined from historical event frequencies and magnitudes. To approximate $p_l(e_l)$, we use a kernel density estimate derived from the following exponentially decaying kernel:
$$K_{b,x}(y) = \frac{(1 + 1/b^2)\,\exp(-\Delta_{xy}/b)}{2\pi R^2\,\left(1 + \exp(-\pi/b)\right)}. \qquad (4)$$
² In this paper, by distance between two points on the surface of the earth we refer to the great-circle distance. This can be represented in degrees, radians, or kilometers (using the average earth radius of 6371 km).
Figure 1: Heat map (large values in red, small in blue) of the prior event location density log pl (el ).
Here b > 0 is the bandwidth, $\Delta_{xy}$ is the distance (in radians) between locations x and y on the
surface of the earth, and R is the earth's radius. The bandwidth was estimated via cross-validation.
In addition, we additively mixed this kernel density with a uniform distribution, with prior probability 0.001, to allow the possibility of explosions at an arbitrary location. The overall density, as
illustrated in Figure 1, was pre-computed on a one degree grid and interpolated during inference.
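A direct implementation of this kernel and the mixed density is short; the sketch below is ours, with the bandwidth left as an input since the paper selects it by cross-validation, and eps = 0.001 the uniform mixing weight quoted above.

```python
# Sketch (ours) of the spherical kernel of Eq. (4) and the mixed location
# density; b and delta are in radians, R is the earth radius in km, and
# values are probability densities per km^2.
import numpy as np

def kernel(delta, b, R=6371.0):
    return ((1.0 + 1.0 / b**2) * np.exp(-delta / b)
            / (2.0 * np.pi * R**2 * (1.0 + np.exp(-np.pi / b))))

def location_density(deltas_to_history, b, eps=0.001, R=6371.0):
    # kernel density estimate over historical events, plus a uniform floor
    kde = float(np.mean([kernel(d, b, R) for d in deltas_to_history]))
    uniform = 1.0 / (4.0 * np.pi * R**2)   # uniform density on the sphere
    return (1.0 - eps) * kde + eps * uniform
```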
3.2 Correct Detections
The probability that an event's j-th phase, 1 ≤ j ≤ J, is detected by a station k, 1 ≤ k ≤ K, depends on the wave type or phase, the station, and the event's magnitude, depth, and distance to the station. Let $d^{ijk}$ be a binary indicator variable for such a detection of event i, and $\Delta_{ik}$ the distance between event i and station k. Then we have
$$P_\theta(d^{ijk} = 1 \mid e^i) = p^{jk}_d(e^i_m, e^i_d, \Delta_{ik}). \qquad (5)$$
If an event phase is detected at a station, i.e. $d^{ijk} = 1$, our model specifies a probability distribution for the attributes of that detection, $a^{ijk}$. The arrival time, $a^{ijk}_t$, is assigned a Laplacian distribution whose mean consists of two parts. The first is the IASPEI travel time prediction for that phase, which depends only on the event depth and the distance between the event and station. The second is a learned station-specific correction which accounts for inhomogeneities in the earth's crust, which allow seismic waves to travel faster or slower than the IASPEI prediction. The station-specific correction also accounts for any systematic biases in picking seismic onsets from waveforms. Let $\mu^{jk}_t$ be the location of this Laplacian (a function of the event time, depth, and distance to the station) and let $b^{jk}_t$ be its scale. Truncating this Laplacian to the range of possible arrival times produces a normalization constant $Z^{jk}_t$, so that
$$P_\theta(a^{ijk}_t \mid d^{ijk} = 1, e^i) = \frac{1}{Z^{jk}_t}\,\exp\!\left(-\frac{\bigl|a^{ijk}_t - \mu^{jk}_t(e^i_t, e^i_d, \Delta_{ik})\bigr|}{b^{jk}_t}\right). \qquad (6)$$
Similarly, the arrival azimuth and slowness follow a Laplacian distribution. The location $\mu^{jk}_z$ of the arrival azimuth $a^{ijk}_z$ depends only on the location of the event, while the location $\mu^{jk}_s$ of the arrival slowness $a^{ijk}_s$ depends only on the event depth and distance to the station. The scales of all these Laplacians are fixed for a given phase and station, so that
$$P_\theta(a^{ijk}_z \mid d^{ijk} = 1, e^i) = \frac{1}{Z^{jk}_z}\,\exp\!\left(-\frac{\bigl|a^{ijk}_z - \mu^{jk}_z(e^i_l)\bigr|}{b^{jk}_z}\right), \qquad (7)$$
$$P_\theta(a^{ijk}_s \mid d^{ijk} = 1, e^i) = \frac{1}{Z^{jk}_s}\,\exp\!\left(-\frac{\bigl|a^{ijk}_s - \mu^{jk}_s(e^i_d, \Delta_{ik})\bigr|}{b^{jk}_s}\right). \qquad (8)$$
The arrival amplitude $a^{ijk}_a$ is similar to the detection probability in that it depends only on the event magnitude, depth, and distance to the station. We model the log of the amplitude via a linear regression model with Gaussian noise:
$$P_\theta(a^{ijk}_a \mid d^{ijk} = 1, e^i) = \frac{1}{\sqrt{2\pi}\,\sigma_{ajk}}\,\exp\!\left(-\frac{\bigl(\log(a^{ijk}_a) - \mu^{jk}_a(e^i_m, e^i_d, \Delta_{ik})\bigr)^2}{2\sigma_{ajk}^2}\right). \qquad (9)$$
Finally, the phase label $a^{ijk}_h$ automatically assigned to the detection follows a multinomial distribution whose parameters depend on the true phase, j:
$$P_\theta(a^{ijk}_h \mid d^{ijk} = 1, e^i) = p^{jk}_h(a^{ijk}_h). \qquad (10)$$
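Taken together, Eqs. (6)-(10) give the attribute likelihood of one associated detection; the sketch below (ours) computes its log, with the parameter dictionary standing in for the fitted phase- and station-specific values.

```python
# Sketch (ours) of the per-detection attribute log-likelihood of
# Eqs. (6)-(10); `p` holds the mu/b/Z/sigma parameters from the text.
import numpy as np

def detection_loglik(att, p):
    # arrival time: truncated Laplacian, Eq. (6) (Z_t absorbs normalization)
    ll = -abs(att["time"] - p["mu_t"]) / p["b_t"] - np.log(p["Z_t"])
    # azimuth and slowness: Laplacians, Eqs. (7) and (8)
    ll += -abs(att["azimuth"] - p["mu_z"]) / p["b_z"] - np.log(p["Z_z"])
    ll += -abs(att["slowness"] - p["mu_s"]) / p["b_s"] - np.log(p["Z_s"])
    # log-amplitude: Gaussian, Eq. (9)
    resid = np.log(att["amplitude"]) - p["mu_a"]
    ll += (-0.5 * np.log(2.0 * np.pi) - np.log(p["sigma_a"])
           - resid**2 / (2.0 * p["sigma_a"]**2))
    # automatically assigned phase label: multinomial, Eq. (10)
    ll += np.log(p["p_h"][att["phase"]])
    return ll
```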
The phase- and station-specific detection distributions, $p^{jk}_d(\cdot)$, were obtained using logistic regression models estimated via a hierarchical Bayesian procedure [9]. Because phase labels indicate among other things the general physical path taken from an event to a station, a distinct set of features were learned from the event characteristics for each phase. To estimate the individual station weights $\beta_{wjk}$ for each phase j and feature w, a hierarchical model was specified in which each station-specific weight is independently drawn from a feature-dependent global Normal distribution, so that $\beta_{wjk} \sim N(\mu_{wj}, \sigma^2_{wj})$. Weakly informative diffuse priors $\mu_{wj} \sim N(0, 100^2)$, $\sigma^{-2}_{wj} \sim \mathrm{Gamma}(0.01, 0.01)$, were placed on the parameters of these global distributions, and posterior mean estimates of the station-specific weights obtained via Gibbs sampling. Figure 2 shows
two of the empirical and modeled distributions for one phase-site.
[Figure 2 plots not reproduced in this extraction. Left panel: "Detection probability at station 6 for P phase, surface event" - probability (0.0-1.0) vs. distance (0-180 deg), a model curve at 3.5 mb against data for 3-4 mb events. Right panel: "Time Residuals around IASPEI prediction for P phase at station 6" - probability (0.00-0.10) vs. time (-6 to 6 s), model vs. data.]
Figure 2: Conditional detection probabilities and arrival time distributions (relative to the IASPEI
prediction) for the P phase at Station 6.
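A stripped-down sketch of this detection model is given below; it is our illustration, the feature map is a placeholder rather than the paper's feature set, and the shrinkage helper is a simple stand-in for the Gibbs-sampled posterior means.

```python
# Sketch (ours) of the hierarchical logistic detection model: the detection
# probability is a logistic function of event features, with per-station
# weights shrunk toward phase-level means.
import numpy as np

def detect_prob(mag, depth_km, dist_deg, beta):
    # placeholder feature map (assumption, not the paper's features)
    x = np.array([1.0, mag, depth_km / 700.0, dist_deg / 180.0])
    return 1.0 / (1.0 + np.exp(-(beta @ x)))

def shrink_station_weights(beta_stations, tau2, sigma2):
    # simple shrinkage toward the global phase-level mean, a stand-in for
    # the posterior mean estimates obtained via Gibbs sampling
    mu = beta_stations.mean(axis=0)
    w = tau2 / (tau2 + sigma2)
    return mu + w * (beta_stations - mu)
```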
3.3 False Detections
Each station, k, also generates a set of false detections $f^k$ through a time-homogeneous Poisson process with rate $\lambda^k_f$:
$$P_\theta(|f^k|) = \frac{(\lambda^k_f \cdot T)^{|f^k|} \exp(-\lambda^k_f \cdot T)}{|f^k|!}. \qquad (11)$$
The time $f^{kl}_t$, azimuth $f^{kl}_z$, and slowness $f^{kl}_s$ of these false detections are generated uniformly over their respective ranges. The amplitude $f^{kl}_a$ of the false detection is generated from a mixture of two Gaussians, $p^k_a(f^{kl}_a)$. Finally, the phase label $f^{kl}_h$ assigned to the false detection follows a multinomial distribution, $p^k_h(f^{kl}_h)$. If the azimuth and slowness take values on ranges of length $M_z$ and $M_s$, respectively, then the probability of the l-th false detection is given by
$$P_\theta(f^{kl}) = \frac{1}{T}\,\frac{1}{M_z}\,\frac{1}{M_s}\,p^k_a(f^{kl}_a)\,p^k_h(f^{kl}_h). \qquad (12)$$
Since the false detections at a station are exchangeable, we have
$$P_\theta(f^k) = P_\theta(|f^k|)\cdot|f^k|!\,\prod_{l=1}^{|f^k|} P_\theta(f^{kl}) = \exp(-\lambda^k_f T)\,\prod_{l=1}^{|f^k|} \frac{\lambda^k_f}{M_z M_s}\,p^k_a(f^{kl}_a)\,p^k_h(f^{kl}_h). \qquad (13)$$
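The false-detection model of Eqs. (11)-(13) can be forward-sampled as below; this is our sketch, the mixture and phase parameters are placeholders, and we assume the two-Gaussian mixture applies to log-amplitude.

```python
# Sketch (ours) of sampling one station's false detections, Eqs. (11)-(13).
import numpy as np

rng = np.random.default_rng(0)

def sample_false_detections(lam_f, T, Mz=360.0, Ms=20.0,
                            amp_mix=((0.6, 0.0, 1.0), (0.4, 2.0, 0.5)),
                            phase_probs=(0.5, 0.3, 0.2)):
    n = rng.poisson(lam_f * T)                       # Eq. (11)
    dets = []
    for _ in range(n):
        (w1, m1, s1), (_, m2, s2) = amp_mix          # placeholder mixture
        log_amp = rng.normal(m1, s1) if rng.random() < w1 else rng.normal(m2, s2)
        dets.append({
            "time": rng.uniform(0.0, T),             # uniform attributes
            "azimuth": rng.uniform(0.0, Mz),
            "slowness": rng.uniform(0.0, Ms),
            "amplitude": float(np.exp(log_amp)),
            "phase": int(rng.choice(len(phase_probs), p=phase_probs)),
        })
    return dets
```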
4 Inference
Combining the model components developed in the preceding section, the overall probability of any
hypothesized sequence of events e, detected event phases d, arrival attributes a for correctly detected
event phases, and arrival attributes f for falsely detected events is
$$P(e, d, a, f) = P_\theta(e)\,P_\theta(d \mid e)\,P_\theta(a \mid d, e)\,P_\theta(f). \qquad (14)$$
We will attempt to find the most likely explanation consistent with the observations. This involves
determining e, d, a, and f which maximize P (e, d, a, f ), such that the set of detections implied
by d, a, and f correspond exactly with the observed detections. Since detections from real seismic
sensors are observed incrementally and roughly in time-ascending order, our inference algorithm
also produces an incremental hypothesis which advances with time. Our algorithm can be seen as a
form of greedy search, in which the current hypothesis is improved via a set of local moves.
Let MT denote the maximum travel time for any phase. Initially, we start with an event-window
of size W from t0 = 0 to t1 = W , and a detection-window of size W + MT from t0 = 0 to
t1 = W + MT . Our starting hypothesis is that all detections in our detection-window are false
detections and there are no events. We then repeatedly apply the birth, death, improve-event, and
improve-detection moves (described below) for a fixed number of iterations (N times the number of
detections in that window) before shifting the windows forward by a step size S. Any new detections
added to the detection window are again assumed to be false detections. As the windows move
forward the events older than $t_0 - MT$ become stable: none of the moves modify either the event or
detections associated with them. These events are then output. While in theory this algorithm never
needs to terminate, our experiments continue until the test dataset is fully consumed.
In order to simplify the computations needed to compare alternate hypotheses, we decompose the
overall probability of Eq. (14) into the contribution from each event. We define the score Se of an
event as the probability ratio of two hypotheses: one in which the event exists, and another in which
the event doesn't exist and all of its associated detections are noise. If an event has score less than 1,
an alternative hypothesis in which the event is deleted clearly has higher probability. Critically, this
event score is unaffected by other events in the current hypothesis. From Eqs. (3) and (13) we have
$$S_e(e^i) = \frac{p_l(e^i_l)}{D}\,\lambda_e\,\lambda_m \exp(-\lambda_m e^i_m)\,\prod_{j,k} P_\theta(d^{ijk} \mid e^i)\left[\delta(d^{ijk}, 0) + \delta(d^{ijk}, 1)\,\frac{P_\theta(a^{ijk} \mid d^{ijk}, e^i)}{\frac{\lambda^k_f}{M_z M_s}\,p^k_a(f^{kl}_a)\,p^k_h(f^{kl}_h)}\right].$$
Note that the final fraction is a likelihood ratio comparing interpretations of the same detection as
either the detection of event i's j-th phase at station k, or the l-th false detection at station k. We
can further decompose the score into scores Sd for each detection. The score of dijk , defined when
dijk = 1, is the ratio of the probabilities of the hypothesis where the detection is associated with
phase j of event i at station k, and one in which this detection is false and phase j of event i is
missed by station k:
$$S_d(d^{ijk}) = \frac{p^{jk}_d(e^i_m, e^i_d, \Delta_{ik})}{1 - p^{jk}_d(e^i_m, e^i_d, \Delta_{ik})}\;\frac{P_\theta(a^{ijk} \mid d^{ijk}, e^i)}{\frac{\lambda^k_f}{M_z M_s}\,p^k_a(f^{kl}_a)\,p^k_h(f^{kl}_h)}. \qquad (15)$$
By definition, any detection with score less than 1 is more likely to be a false detection. Also, the
score of an individual detection is independent of other detections and unassociated events in the
hypothesis. These scores play a key role in the following local search moves.
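In code, the detection score of Eq. (15) is a simple likelihood ratio; the sketch below is ours, taking the detection probability and the attribute log-likelihood of Eqs. (6)-(10) as inputs.

```python
# Sketch (ours) of the detection score of Eq. (15): the likelihood ratio of
# "phase j of event i detected at station k" versus "this is a false alarm".
import numpy as np

def detection_score(p_detect, assoc_loglik, lam_f, Mz, Ms, p_amp, p_phase):
    # p_detect = p_d^{jk}(mag, depth, dist); assoc_loglik from Eqs. (6)-(10)
    odds = p_detect / (1.0 - p_detect)
    noise_loglik = np.log(lam_f / (Mz * Ms)) + np.log(p_amp) + np.log(p_phase)
    return odds * np.exp(assoc_loglik - noise_loglik)
```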
Birth Move We randomly pick a detection, invert it into an event location (using the detection?s
time, azimuth, and slowness), and sample an event in a 10 degree by 100 second ball around this
inverted location. The depth of the event is fixed at 0, and the magnitude is uniformly sampled.
Improve Detections Move For each detection in the detection window, we consider all possible
phases j of all events i up to MT seconds earlier. We then associate the best event-phase for this
detection that is not already assigned to a detection with higher score at the same station k. If this
best event-phase has score Sd (dijk ) < 1, the detection is changed to a false detection.
Improve Events Move For each event ei , we consider 10 points chosen uniformly at random in a
small ball around the event (2 degrees in longitude and latitude, 100 km in depth, 5 seconds in time,
and 2 units of magnitude), and choose those attributes with the highest score Se (ei ).
[Figure 3 plot not reproduced in this extraction: "Precision-Recall curve with LEB as ground truth" - recall (0.4-1.0) vs. precision (0.4-1.0) for SEL3, SEL3 extrapolation, and NET-VISA.]
Figure 3: Precision-recall performance of the proposed NET-VISA and deployed SEL3 algorithms,
treating the analyst-generated LEB as ground truth.
Death Move Any event ei with score Se (ei ) < 1 is deleted, and all of its currently associated
detections are marked as false alarms.
Final Pruning Before outputting event hypotheses, we perform a final round of pruning to remove
some duplicate events. In particular, we delete any event for which there is another higher-scoring
event within 5 degrees distance and 50 seconds time. Such spurious, or shadow, event hypotheses
arise because real seismic events generate many more phases than we currently model. In addition,
a single phase may sometimes generate multiple detections due to waveform processing, or ?pick?,
errors. These additional unmodeled detections, when taken together, often suggest an additional
event at about the same location and time as the original event.
Note that the birth move is not a greedy move: the proposed event will almost always have a score
Se (ei ) < 1 until some number of detections are assigned in subsequent moves. The overall structure
of these moves could be easily converted to an MCMC or simulated annealing algorithm. However,
in our experiments this search outperformed simple MCMC methods in terms of speed and accuracy.
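The sliding-window search described above can be sketched as follows; this is our schematic (not the NET-VISA implementation), with the moves passed in as callables, MT a placeholder maximum travel time, and W and S matching the 30- and 15-minute values reported later.

```python
# Schematic sketch (ours) of the sliding-window greedy search; Hypothesis
# and the moves are simplified stand-ins. W, S, MT, N follow the text.
class Hypothesis:
    def __init__(self):
        self.events, self.noise = [], []

    def add_noise(self, dets):
        self.noise.extend(dets)

    def stable_events(self, before):
        done = [e for e in self.events if e["time"] < before]
        self.events = [e for e in self.events if e["time"] >= before]
        return done

def run_inference(detections, moves, W=1800.0, S=900.0, MT=1000.0, N=1000):
    hyp, t0 = Hypothesis(), 0.0
    seen = set()
    t_end = max(d["time"] for d in detections)
    while t0 < t_end + S:
        window = [d for d in detections if t0 <= d["time"] < t0 + W + MT]
        new = [d for d in window if id(d) not in seen]
        hyp.add_noise(new)                  # new detections start as noise
        seen.update(id(d) for d in new)
        for _ in range(N * max(len(window), 1)):
            for move in moves:              # birth, improve-*, death
                move(hyp, window)
        yield from hyp.stable_events(before=t0 - MT)
        t0 += S
    yield from hyp.stable_events(before=float("inf"))
```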
5 Experimental Results
As discussed in Section 2, we measure the precision, recall, and average error of our predictions via
an assumed ground truth. We first treat the IMS analyst-generated LEB as ground truth, and compare the performance of our NET-VISA algorithm to the currently deployed SEL3 system. Using
the scores for hypothesized events, we have generated a precision-recall curve for NET-VISA, and
marked SEL3 on it as a point (see Figure 3). Also in this figure, we show a precision-recall curve
for SEL3 using scores from an SVM trained to classify true and false SEL3 events [10] (SEL3 extrapolation). As shown in the figure, NET-VISA has at least 16% more recall at the same precision
as SEL3, and at least 25% more precision at the same recall as SEL3.
The true precision of NET-VISA is perhaps higher than this comparison suggests. We have evaluated
the recall of LEB and NET-VISA with the NEIC dataset as ground truth. Since the NEIC has many
more sensors in the United States than the IMS, it is considered a more reliable summary of seismic
activity in this region. Out of 33 events in the continental United States, LEB found 4, and NET-VISA found 8, including the 4 found by LEB.
Figure 4 shows the recall and error divided among different types of LEB events. The table on
the left shows a break-down by LEB event magnitude. For magnitudes up to 4, NET-VISA has
nearly 20% higher recall with similar error. The table on the right shows a break-down by azimuth
gap, defined as the largest difference in consecutive event-to-station azimuths for stations which
detect an event. Large gaps indicate that the event location is under-constrained. For example, if all
stations are to the southwest of an event, the gap is greater than 270 degrees and the event will be
poorly localized along a line running from southwest to northeast. By using evidence about missed
detections ignored by SEL3, NET-VISA reduces this uncertainty and performs much better.

Figure 4: Recall and error (km) broken down by LEB event magnitude and azimuth gap (degrees).

  mb      Count   SEL3 Recall   SEL3 Err   NET-VISA Recall   NET-VISA Err
  0-2     74      64.9          101        85.1              91
  2-3     36      50.0          186        75.0              171
  3-4     558     66.5          104        85.1              109
  >4      164     86.6          70         93.3              80
  all     832     69.7          99         86.3              103

  Azimuth Gap   Count   SEL3 Recall   SEL3 Err   NET-VISA Recall   NET-VISA Err
  0-90          72      100.0         28         100.0             38
  90-180        315     88.9          76         93.7              72
  180-270       302     51.0          134        82.1              126
  270-360       143     51.0          176        72.0              187
  all           832     69.7          99         86.3              103
All of the results in this section were produced using 7 days of data from the validation set. The
inference used a window size, W , of 30 minutes, a step size, S, of 15 minutes, and N = 1000
iterations. There were a total of 832 LEB events during this period, and roughly 120,000 detections.
The inference took about 4.5 days on a single core running at 2.5 GHz. Estimating model parameters
from 2.5 months of training data took about 1 hour.
6 Conclusions and Further Work
Our results demonstrate that a Bayesian approach to seismic monitoring can improve significantly
on the performance of classical systems. The NET-VISA system can not only reduce the human
analyst effort required to achieve a given level of accuracy, but can also lower the magnitude threshold for reliable detection. Given that the difficulty of seismic monitoring was cited as one of the
principal reasons for non-ratification of the CTBT by the United States Senate in 1999, one hopes
that improvements in monitoring may increase the chances of final ratification and entry into force.
Putting monitoring onto a sound probabilistic footing also facilitates further improvements such as
continuous estimation of local noise conditions, travel time, and attenuation models without the need
for ground-truth calibration experiments (controlled explosions). We also expect to lower the detection threshold significantly by extending the generative model to include waveform characteristics,
so that detection becomes part of a globally integrated inference process - and hence susceptible to
top-down influences - rather than being a purely local, bottom-up, hard-threshold decision.
Acknowledgments
We would like to thank the many seismologists who patiently explained to us the intricacies of their
field, among them Ronan LeBras, Robert Engdahl, David Bowers, Bob Pearce, Stephen Myers,
Dmitry Storchak, Istvan Bondar, and Barbara Romanowicz. We also received assistance from several Berkeley undergraduates, including Matthew Cann, Hong Hu, Christopher Lin, and Andrew
Lee. The third author?s work was performed under the auspices of the U.S. Department of Energy
at Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The other authors were partially supported by the Preparatory Commission for the CTBT. Finally, the first author
wishes to thank his family for their infinite patience and support.
References
[1] Y. Bar-Shalom and T.E. Fortmann. Tracking and Data Association. Academic Press, 1988.
[2] P. M. Shearer. Improving local earthquake location using the L1 norm and waveform cross
correlation: Application to the Whittier Narrows, California, aftershock sequence. J. Geophys.
Res., 102:8269–8283, 1997.
[3] F. Waldhauser and W. L. Ellsworth. A double-difference earthquake location algorithm:
method and application to the Northern Hayward Fault, California. Bulletin of the Seismological Society of America, 90:1353–1368, 2000.
[4] J. Pujol. Earthquake location tutorial: graphical approach and approximate epicentral location
techniques. Seis. Res. Letter, 75:63–74, 2004.
[5] Stephen C. Myers, Gardar Johannesson, and William Hanley. A Bayesian hierarchical method
for multiple-event seismic location. Geophysical Journal International, 171:1049–1063, 2009.
[6] L. Geiger. Probability method for the determination of earthquake epicenters from the arrival
time only. Bull. St. Louis Univ., 8:60–71, 1912.
[7] D. A. Storchak, J. Schweitzer, and P. Bormann. The IASPEI standard seismic phase list.
Seismol. Res. Lett., 74(6):761–772, 2003.
[8] Brian Milch, Bhaskara Marthi, Stuart J. Russell, David Sontag, Daniel L. Ong, and Andrey
Kolobov. BLOG: Probabilistic models with unknown objects. In IJCAI, pages 1352–1359,
2005.
[9] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman &
Hall, 2004.
[10] Lester Mackey, Ariel Kleiner, and Michael I. Jordan. Improved automated seismic event extraction using machine learning. Eos Trans. AGU, 90(52), 2009. Fall Meet. Suppl., Abstract
S31B-1714.
| 4100 |@word aircraft:1 compression:1 norm:1 km:4 additively:1 hu:1 pick:2 thereby:1 score:16 united:4 daniel:1 existing:1 err:4 current:8 comparing:2 ronan:1 subsequent:1 informative:1 enables:1 remove:1 treating:1 mackey:1 generative:5 half:1 advancement:1 greedy:2 ith:1 core:2 short:1 record:1 footing:1 location:21 along:2 schweitzer:1 direct:1 become:1 ik:7 consists:2 falsely:1 mask:1 roughly:3 preparatory:1 love:1 proliferation:1 globally:1 eil:3 automatically:2 gov:1 window:9 cardinality:1 becomes:1 project:2 estimating:1 matched:3 hayward:1 maximizes:1 what:1 z:1 developed:1 proposing:1 berkeley:7 attenuation:2 exactly:1 demonstrates:1 lester:1 exchangeable:2 unit:1 crust:3 louis:1 before:2 ice:1 t1:2 local:7 treat:1 vertically:2 modify:1 sd:3 meet:1 path:2 might:1 studied:1 suggests:2 range:5 bjk:4 acknowledgment:1 earthquake:5 richter:1 procedure:1 displacement:1 empirical:1 significantly:3 matching:2 seismological:2 word:1 pre:2 imprecise:1 suggest:3 cannot:1 interior:1 onto:1 gelman:1 milch:1 influence:1 conventional:1 map:1 center:2 leb:13 starting:1 truncating:1 independently:1 array:1 nuclear:4 his:1 play:1 homogeneous:2 hypothesis:12 sig:1 pa:1 associate:1 jk:7 database:1 labeled:1 observed:4 role:1 bottom:1 thousand:1 wj:4 region:1 russell:3 mz:4 highest:1 broken:1 ong:1 trained:1 weakly:1 purely:1 localization:3 upon:1 bipartite:1 treaty:2 easily:2 eit:1 various:3 represented:1 america:1 seis:1 univ:1 heat:1 distinct:1 effective:1 describe:1 detected:7 birth:3 eos:1 whose:3 quite:1 solve:1 inhomogeneity:1 final:5 sequence:2 myers:2 net:19 took:2 outputting:1 mb:4 combining:1 pka:1 poorly:1 achieve:1 wjk:2 ijcai:1 double:2 satellite:1 extending:1 produce:4 perfect:1 seismology:1 incremental:1 object:1 andrew:1 measured:2 kolobov:1 received:1 eq:2 longitude:3 c:3 predicted:4 indicate:3 involves:1 shadow:1 direction:1 waveform:6 radius:2 correct:1 attribute:5 filter:1 kb:1 human:2 southwest:2 pjk:5 decompose:2 brian:1 absorption:1 elementary:1 pl:5 correction:2 around:3 considered:1 ground:10 normal:1 exp:13 great:1 lawrence:2 hall:1 matthew:1 consecutive:1 fh:1 purpose:2 earth:10 estimation:1 travel:8 outperformed:1 combinatorial:1 label:5 currently:3 sensitive:1 vibration:1 largest:1 minimization:1 hope:1 clearly:1 sensor:10 gaussian:1 always:1 aim:1 rather:1 derived:1 improvement:2 likelihood:2 detect:3 inference:10 dependent:1 el:4 integrated:3 typically:1 initially:1 spurious:2 selective:1 interested:1 overall:5 among:3 constrained:1 field:1 never:1 extraction:1 sampling:1 chapman:1 stuart:2 nearly:2 report:3 simplify:1 duplicate:1 primarily:2 randomly:1 gamma:1 national:3 comprehensive:2 individual:2 phase:35 william:1 attempt:2 detection:84 interest:1 highly:1 possibility:1 evaluation:1 violation:1 mixture:1 held:1 predefined:1 edge:2 explosion:4 xy:2 respective:1 tree:1 circle:1 re:3 delete:1 classify:1 earlier:1 cover:1 bull:1 subset:1 entry:1 uniform:1 northeast:1 azimuth:11 too:1 gutenberg:1 reported:1 commission:1 providence:1 ac52:1 calibrated:1 andrey:1 st:1 density:5 international:3 cited:1 multitarget:1 probabilistic:5 off:1 systematic:1 lee:1 picking:1 contract:1 together:2 michael:1 again:1 choose:1 expert:3 account:3 potential:1 converted:1 de:1 b2:1 onset:3 stream:2 piece:1 depends:6 break:2 picked:1 lab:1 extrapolation:2 performed:1 red:1 wave:18 decaying:1 start:1 contribution:1 accuracy:4 who:2 characteristic:2 correspond:1 identify:1 bayesian:6 raw:1 accurately:1 produced:2 critically:1 none:1 monitoring:7 bob:1 unaffected:1 history:3 ah:1 detector:4 
Towards Property-Based Classification of Clustering Paradigms
Margareta Ackerman, Shai Ben-David, and David Loker
D.R.C. School of Computer Science
University of Waterloo, Canada
{mackerma, shai, dloker}@cs.uwaterloo.ca
Abstract
Clustering is a basic data mining task with a wide variety of applications. Not
surprisingly, there exist many clustering algorithms. However, clustering is an ill-defined
problem: given a data set, it is not clear what a "correct" clustering for
that set is. Indeed, different algorithms may yield dramatically different outputs
for the same input sets. Faced with a concrete clustering task, a user needs to
choose an appropriate clustering algorithm. Currently, such decisions are often
made in a very ad hoc, if not completely random, manner. Given the crucial effect
of the choice of a clustering algorithm on the resulting clustering, this state of
affairs is truly regrettable. In this paper we address the major research challenge
of developing tools for helping users make more informed decisions when they
come to pick a clustering tool for their data. This is, of course, a very ambitious
endeavor, and in this paper, we make some first steps towards this goal. We propose to address this problem by distilling abstract properties of the input-output
behavior of different clustering paradigms.
In this paper, we demonstrate how abstract, intuitive properties of clustering functions can be used to taxonomize a set of popular clustering algorithmic paradigms.
On top of addressing deterministic clustering algorithms, we also propose similar
properties for randomized algorithms and use them to highlight functional differences between different common implementations of k-means clustering. We also
study relationships between the properties, independent of any particular algorithm. In particular, we strengthen Kleinberg's famous impossibility result, while
providing a simpler proof.
1
Introduction
In spite of the wide use of clustering in many practical applications, currently, there exists no principled method to guide the selection of a clustering algorithm. Of course, users are aware of the costs
involved in employing different clustering algorithms (software purchasing costs, running times,
memory requirements, needs for data preprocessing etc.) but there is very little understanding of
the differences in the outcomes that these algorithms may produce. We focus on that aspect - the
input-output properties of different clustering algorithms.
The choice of an appropriate clustering should, of course, be task dependent. A clustering that
works well for one task may be unsuitable for another. Even more than for supervised learning, for
clustering, the choice of an algorithm must incorporate domain knowledge. While some domain
knowledge is embedded in the choice of similarity between domain elements (or the embedding of
these elements into some Euclidean space), there is still a large variance in the behavior of difference
clustering paradigms over a fixed similarity measure.
For some clustering tasks, there is a natural clustering objective function that one may wish to optimize (like k-means for vector quantization coding tasks), but very often the task does not readily
translate into a corresponding objective function. Often users are merely searching for a meaningful
clustering, without a prior preference for any specific objective function. Many (if not most) common clustering paradigms do not optimize any clearly defined objective utility, either because no
such objective is defined (like in the case of, say, single linkage clustering) or because optimizing
the most relevant objective is computationally infeasible. To overcome computation infeasibility,
the algorithms end up carrying out a heuristic whose outcome may be quite different than the actual
objective-based optimum (that is the case with the k-means algorithm as well as with spectral clustering algorithms). What seems to be missing is a clear understanding of the differences in clustering
outputs in terms of intuitive and usable properties.
We propose a different approach to providing guidance to clustering users by identifying significant properties of clustering functions that, on one hand distinguish between different clustering
paradigms, and on the other hand are intended to be relevant to the domain knowledge that a user
might have access to. Based on domain expertise users could then choose which properties they
want an algorithm to satisfy, and determine which algorithms meet their requirements.
Our vision is that ultimately, there would be a sufficiently rich set of properties that would provide a
detailed, property-based, taxonomy of clustering methods, that could, in turn, be used as guidelines
for a wide variety of clustering applications. This is a very ambitious enterprize, but that should
not deter researchers from addressing it. This paper takes a step towards that goal by using natural
properties to examine some popular clustering approaches.
We present a taxonomy for common deterministic clustering functions with respect to the properties that we propose. We also show how to extend this framework to the randomized clustering
algorithms, and use these properties to distinguish between two k-means heuristics.
We also study relationships between the properties, independent of any particular algorithm. In particular, we strengthen Kleinberg's impossibility result [8] using a relaxation of one of the properties
that he proposed.
1.1 Previous work
Our work follows a theoretical study of clustering that began with Kleinberg's impossibility result
[8], in which he proposes three candidate axioms of clustering and shows that no clustering function
can simultaneously satisfy these three axioms. Ackerman and Ben-David [1] subsequently showed
these axioms to be consistent in the setting of clustering quality measures. [1] also proposes to
make a distinction between clustering "axioms" and clustering "properties", where the axioms are
the features that define which partitionings are worthy of the name "clustering", and the properties
vary between different clustering paradigms and may be used to construct a taxonomy of clustering
algorithms. We adopt that approach here.
There are previous results that provide some property based characterizations of a specific clustering algorithm. In 1975, Jardine and Sibson [6] gave a characterization of single linkage. Last
year, Bosagh Zadeh and Ben-David [3] characterized single-linkage within Kleinberg's framework of clustering functions using a special invariance property ("path distance coherence"). Very recently, Ackerman, Ben-David and Loker provided a characterization of the family of linkage-based
clustering in terms of a few natural properties [2].
Some heuristics have been proposed as a means of distinguishing between the output of clustering
algorithms on specific data. These approaches require running the algorithms, and then selecting
an algorithm based on the outputs that they produce. In particular, validity criteria can be used
to evaluate the output of clustering algorithms. These measures can be used to select a clustering
algorithm by choosing the one that yields the highest quality clustering [10]. However, the result
only applies to the original data, and there are no guarantees on the quality of the output of these
algorithms on any other data.
2 Definitions and Formal Framework
Clustering is a wide and heterogeneous domain. For most of this paper, we focus on a basic subdomain where the (only) input to the clustering function is a finite set of points endowed with a
between-points distance (or similarity) function, and the output is a partition of that domain.
A distance function is a symmetric function d : X × X → R^+, such that d(x, x) = 0 for all x ∈ X.
The data sets that we consider are pairs (X, d), where X is some finite domain set and d is a distance
function over X. These are the inputs for clustering functions.
A k-clustering
C = {C1 , C2 , . . . , Ck } of a data set X is a partition of X into k disjoint subsets of
[
X (so,
Ci = X). A clustering of X is a k-clustering of X for some 1 ? k ? |X|.
i
For a clustering C, let |C| denote the number of clusters in C and |Ci | denote the number of points
in a cluster Ci . For x, y ? X and a clustering C of X, we write x ?C y if x and y belong to the
same cluster in C and x 6?C y, otherwise.
We say that (X, d) and (X', d') are isomorphic data sets, denoting it by (X, d) ∼ (X', d'), if there exists a bijection φ : X → X' so that d(x, y) = d'(φ(x), φ(y)) for all x, y ∈ X.
We say that two clusterings (or partitions) C = (c_1, . . . , c_k) of some domain (X, d) and C' = (c'_1, . . . , c'_k) of some domain (X', d') are isomorphic clusterings, denoted (C, d) ≅ (C', d'), if there exists a bijection φ : X → X' such that for all x, y ∈ X, d(x, y) = d'(φ(x), φ(y)) and, on top of that, x ∼_C y if and only if φ(x) ∼_{C'} φ(y). Note that this notion depends on both the underlying
distance functions and the clusterings.
We consider two definitions of a clustering function.
Definition 1 (General clustering function). A general clustering function is a function that takes as
input a pair (X, d) and outputs a clustering of the domain X.
The second type are clustering functions that require that the number of clusters be provided as part
of the input.
Definition 2 (k-clustering function). A k-clustering function is a function that takes as input a pair (X, d) and a parameter 1 ≤ k ≤ |X| and outputs a k-clustering of the domain X.
2.1 Properties of Clustering Functions
A key component in our approach are properties of clustering functions that address the input-output
behavior of these functions. The properties are formulated for k-clustering functions. However,
all the properties, with the exception of locality¹ and refinement-confined, apply also for general
clustering functions.
Isomorphism invariance: The following invariance property, proposed in [2] under the name ?representation independence?, seems to be an essential part of our understanding of what clustering is.
It requires that the output of a k-clustering function is independent of the labels of the data points.
A k-clustering function F is isomorphism invariant if whenever (X, d) ∼ (X', d'), then, for every k, F(X, d, k) and F(X', d', k) are isomorphic clusterings.
Scale invariance: Scale invariance, proposed by Kleinberg [8], requires that the output of a clustering be invariant to uniform scaling of the data. A k-clustering function F is scale invariant if for any data sets (X, d) and (X, d'), if there exists a real number c > 0 so that for all x, y ∈ X, d(x, y) = c · d'(x, y), then for every 1 ≤ k ≤ |X|, F(X, d, k) = F(X, d', k).
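These invariance properties can also be probed empirically. As a concrete illustration, the harness below (our own sketch; `cluster_fn` is a placeholder for any k-clustering function that takes a distance matrix and k and returns cluster labels) searches for counterexamples to scale invariance:

```python
import numpy as np

def partition_of(labels):
    # Represent a clustering as a set of frozensets of point indices.
    return frozenset(frozenset(np.flatnonzero(labels == c).tolist())
                     for c in np.unique(labels))

def check_scale_invariance(cluster_fn, n=20, k=3, c=7.5, trials=10, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        pts = rng.random((n, 2))
        d = np.sqrt(((pts[:, None] - pts[None, :]) ** 2).sum(-1))
        if partition_of(cluster_fn(d, k)) != partition_of(cluster_fn(c * d, k)):
            return False  # counterexample found: scale invariance fails
    return True  # no counterexample found (evidence, not a proof)
```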
Order invariance: Order invariance, proposed by Jardine and Sibson [6], describes clustering functions that are based on the ordering of pairwise distances. A distance function d' of X is an order invariant modification of d over X if for all x_1, x_2, x_3, x_4 ∈ X, d(x_1, x_2) < d(x_3, x_4) if and only if d'(x_1, x_2) < d'(x_3, x_4). A k-clustering function F is order invariant if whenever a distance function d' over X is an order invariant modification of d, F(X, d, k) = F(X, d', k) for all k.
¹ Locality can also be reformulated for general clustering functions; however, we do not discuss this in this work.
Locality: Intuitively, a k-clustering function is local if its behavior on a union of clusters depends
only on distances between elements of that union, and is independent of the rest of the domain set.
Locality was proposed in [2]. A k-clustering function F is local if for any clustering C output by F and every subset of clusters C' ⊆ C, F(∪C', d, |C'|) = C'.
In other words, for every domain (X, d) and number of clusters k, if X' is the union of k' clusters in F(X, d, k) for some k' ≤ k, then applying F to (X', d) and asking for a k'-clustering will yield the same clusters that we started with.
Consistency: Consistency, proposed by Kleinberg [8], aims to formalize the preference for clusters
that are dense and well-separated. This property requires that the output of a k-clustering function
should remain unchanged after shrinking within-cluster distances and stretching between-cluster
distances.
Given a clustering C of some domain (X, d), we say that a distance function d' over X is (C, d)-consistent if d'(x, y) ≤ d(x, y) whenever x ∼_C y, and d'(x, y) ≥ d(x, y) whenever x ≁_C y. A k-clustering function F is consistent if for every X, d, k, if d' is (F(X, d, k), d)-consistent then F(X, d, k) = F(X, d', k).
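To make the definition concrete, the sketch below (our own; not part of the original formalism) generates a random (C, d)-consistent perturbation of a distance matrix, which can be used to test empirically whether a clustering function's output survives such changes:

```python
import numpy as np

def consistent_variant(d, labels, rng=None):
    """Shrink within-cluster distances, stretch between-cluster distances."""
    rng = rng or np.random.default_rng()
    same = labels[:, None] == labels[None, :]
    factor = np.where(same, rng.uniform(0.5, 1.0, d.shape),
                            rng.uniform(1.0, 2.0, d.shape))
    factor = (factor + factor.T) / 2   # keep the perturbed distances symmetric
    d2 = d * factor
    np.fill_diagonal(d2, 0.0)
    return d2
```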
While this property may sound desirable and natural, it turns out that many common clustering
paradigms fail to satisfy it. In a sense, this property may be viewed as the main weakness of Kleinberg?s impossibility result.
The following two properties, proposed in [2], are straightforward relaxations of consistency.
Inner and Outer consistency: Outer consistency represents the preference for well separated clusters, by requiring that the output of a k-clustering function not change if clusters are moved away
from each other.
A distance function d' over X is (C, d)-outer consistent if d'(x, y) = d(x, y) whenever x ∼_C y, and d'(x, y) ≥ d(x, y) whenever x ≁_C y. Outer consistency is defined in the same way as consistency, except that (C, d)-consistent is replaced by (C, d)-outer consistent.
Inner consistency represents the preference for placing points that are close together within the same
cluster, by requiring that the output of a k-clustering function not change if elements of the same
cluster are moved closer to each other.
Inner consistency is defined in a similar manner to outer-consistency, except that d' is (C, d)-inner consistent if d'(x, y) ≤ d(x, y) whenever x ∼_C y, and d'(x, y) = d(x, y) whenever x ≁_C y.
Clearly, consistency implies both outer-consistency and inner-consistency. Note also that if a function is both inner-consistent and outer-consistent then it is consistent.
k-Richness: The k-richness property requires that we be able to obtain any partition of the domain by modifying the distances between elements. This property is based on Kleinberg's [8] richness axiom, requiring that for any sets X_1, X_2, . . . , X_k, there exists a distance function d over X' = ∪_{i=1}^k X_i so that F(X', d) = {X_1, X_2, . . . , X_k}. A k-clustering function F satisfies k-richness if for any sets X_1, X_2, . . . , X_k, there exists a distance function d over X' = ∪_{i=1}^k X_i so that F(X', d, k) = {X_1, X_2, . . . , X_k}.
Outer richness: Outer richness, a natural variation on the k-richness property, was proposed in
[2] under the name ?extended richness.? (we have renamed it to contrast this property with ?inner
richness?, which we propose in Appendix A). Given k sets, a k-clustering function satisfies outer
richness if there exists some way of setting the between-set distances, without modifying distances
within the sets, we can get F to output each of these data sets as a cluster. This corresponds to the
intuition that any groups of points, regardless of within distances, can be made into separate clusters.
A clustering function F is outer-rich if for every set of domains {(X_1, d_1), . . . , (X_k, d_k)}, there exists a distance function d̂ over ∪_{i=1}^k X_i that extends each of the d_i's (for i ≤ k), such that F(∪_{i=1}^k X_i, d̂, k) = {X_1, X_2, . . . , X_k}.
Threshold-richness: Fundamentally, the goal of clustering is to group points that are close to each
other, and to separate points that are far apart. Axioms of clustering need to represent these objectives and no set of axioms of clustering can be complete without integrating such requirements.
[Figure 1 table: rows list the k-clustering functions Single Linkage, Average Linkage, Complete Linkage, k-median, k-means, Min sum, Ratio cut, and Normalized cut; columns list the properties outer consistent, inner consistent, local, refinement-confined, order invariant, k-rich, outer rich, inner rich, threshold rich, scale invariant, and isomorphism invariant; the individual check marks did not survive extraction.]
Figure 1: A taxonomy of k-clustering functions, illustrating what properties are satisfied by some
common k-clustering functions. The results in the k-means row apply both when the centers are part
of the data set and when the underlying space is Euclidean and the centers are arbitrary points in the
space.
Consistency is the only previous property that aims to formalize these requirements. However, consistency has some counterintuitive implications (see Section 3 in [1]), and is not satisfied by many
common clustering functions.
A k-clustering function F is threshold-rich if for every clustering C of X, there exist real numbers
a < b so that for every distance function d over X where d(x, y) ≤ a for all x ∼_C y, and d(x, y) ≥ b for all x ≁_C y, we have that F(X, d, |C|) = C.
This property is based on Kleinberg's [8] Γ-forcing property, and is equivalent to the requirement that for every partition Γ, there exists a < b so that (a, b) is Γ-forcing.
Inner richness: Complementary to outer richness, inner richness requires that there be a way of
setting distances within sets, without modifying distances between the sets, so that F outputs each
set as a cluster. This corresponds to the intuition that between-cluster distances cannot eliminate
any partition of X. A k-clustering function F satisfies inner richness if for every data set (X, d) and partition {X_1, X_2, . . . , X_k} of X, there exists a d̂ where for all a ∈ X_i, b ∈ X_j with i ≠ j, d̂(a, b) = d(a, b), and F(∪_{i=1}^k X_i, d̂, k) = {X_1, X_2, . . . , X_k}.
Refinement-confined²: The following formalization was proposed in [2]. A clustering C of X is a refinement of clustering C' of X if every cluster in C is a subset of some cluster in C', or, equivalently, if every cluster of C' is a union of clusters of C. A k-clustering function is refinement-confined if for every 1 ≤ k ≤ k' ≤ |X|, F(X, d, k') is a refinement of F(X, d, k).
3 Property-Based Classification of Common k-Clustering Functions
In this section we present a taxonomy of common k-clustering functions. The taxonomy is presented in Figure 1 (definitions of the k-clustering functions are in Appendix C in the supplementary
material).
The taxonomy in Figure 1 illustrates how clustering algorithms differ from one another. For example, order-invariance and inner-consistency can be used to distinguish among the three common
linkage-based algorithms. Min-sum differs from k-means and k-median in that it satisfies innerconsistency. Unlike all the other algorithms discussed, the spectral clustering functions are not
local.
The proofs of the claims embedded in the table appear in the supplementary material.
² In [2], this property was called "hierarchical clustering".
3.1 Axioms of clustering
Our taxonomy reveals that some intuitive properties, which may be expected of all k-clustering
functions, are not satisfied by some common k-clustering functions. For example, locality is not
satisfied by the spectral clustering functions ratio-cut and normalized-cut. Also, most functions fail
inner consistency, and therefore do not satisfy consistency, even though the latter was previously
proposed as an axiom of k-clustering functions [8].
On the other hand, isomorphism invariance, scale invariance, and all richness properties (in the setting where the number of clusters, k, is part of the input), are satisfied by all the clustering functions
considered. Isomorphism invariance and scale-invariance make for natural axioms. Threshold richness is the only one that is both satisfied by all k-clustering functions considered and reflects the
main objective of clustering: to group points that are close together and to separate points that are
far apart.
It is easy to see that threshold richness implies k-richness. It can be shown that when threshold richness is combined with scale invariance, it also implies outer-richness and inner-richness. Therefore,
we propose that scale-invariance, isomorphism-invariance, and threshold richness can be used as
clustering axioms.
However, we emphasize that these three axioms do not make a complete set of axioms for clustering,
since some functions that satisfy all three properties do not make reasonable k-clustering functions;
a function that satisfies the two invariance properties can also satisfy threshold richness by behaving
reasonably only on particularly well-clusterable data, while having counter-intuitive behavior on
other data sets.
4 Properties for Randomized k-Clustering Functions
We present a formal setting to study and analyze probabilistic k-clustering functions. A probabilistic
k-clustering function F takes a data set (X, d) and an integer 1 ≤ k ≤ |X| and outputs F(X, d, k),
a probability distribution over k-clusterings of X. Let P (F (X, d, k) = C) denote the probability of
clustering C in the probability distribution F (X, d, k).
4.1 Properties of Probabilistic k-Clustering Functions
We translate properties of different types into the probabilistic setting.
Invariance properties: Invariance properties specify when data sets should be clustered in the
same way (ex. isomorphism-invariance, scale-invariance, and order-invariance). Such properties are
translated into the probabilistic setting by requiring that when data sets (X, d) and (X 0 , d0 ) satisfy
some similarity requirements, then F (X, d, k) = F (X 0 , d0 , k) for all k.
Consistency properties: Consistency properties impose conditions that should improve the quality
of a clustering. Every such property has some notion of a "(C, d)-nice" variant that specifies how the underlying distance function can be modified to better flesh out clustering C. In the probabilistic setting, such properties require that whenever d' is a (C, d)-nice variant, the k-clustering function is at least as likely to output C on d' as on d: P[F(X, d', |C|) = C] ≥ P[F(X, d, |C|) = C].
Richness properties: Richness properties require that any desired clustering can be obtained under
certain constraints. In the probabilistic setting, we require that the same occurs with arbitrarily high
probability. For example, the following is the probabilistic version of the k-richness property. The
other variants of richness are reformulated analogously.
Definition 3 (k-Richness). A probabilistic k-clustering function F is k-rich if for any k-clustering C
of X and any ε > 0, there exists a distance function d over X so that P(F(X, d, k) = C) ≥ 1 − ε.
Locality: We now show how to translate locality into the probabilistic setting. We say that a clustering of X specifies how to cluster a subset X' ⊆ X if every cluster that overlaps with X' is contained within X'. Locality requires that a k-clustering function cluster X' in the way specified by the superset X.
[Figure 2 table: rows list Optimal k-means, Random Centroids Lloyd, and Furthest Centroids Lloyd; the property columns are outer consistent and local, the axiom columns are threshold rich, scale invariant, and isomorphism invariant, and the remaining columns are k-rich and outer rich; the individual check marks did not survive extraction.]
Figure 2: An analysis of the k-means clustering function and k-means heuristics. The two leftmost
properties distinguish the k-means clustering function, properties that are satisfied by k-means but
fail for other reasonable k-clustering functions. The next three are proposed axioms of clustering,
and the last two properties follow from the axioms.
In the probabilistic setting, we require that the probability of obtaining a specific clustering of X 0 ?
X is determined by the probability of obtaining that clustering as a subset of F (X, d, k), given that
the output of F on (X, d) specifies how to cluster X 0 .
Definition 4 (Locality (probabilistic)). A probabilistic k-clustering function F is local if for any k-clustering C' of X', X' ⊆ X, and j ≥ k, where P[∃C_1, . . . , C_k ∈ C s.t. ∪_{i=1}^k C_i = X' | F(X, d, j) = C] ≠ 0,
P[F(X', d/X', |C'|) = C'] = P[C' ⊆ C | F(X, d, j) = C and C/X' is a k-clustering] / P[∃C_1, . . . , C_k s.t. ∪_{i=1}^k C_i = X' | F(X, d, j) = C].
5 Properties Distinguishing k-means Heuristics
5.1 k-means and k-means heuristics
One of the most popular clustering algorithms is the Lloyd method, which aims to find clusterings
with low k-means loss. Indeed, the Lloyd method is sometimes referred to as the "k-means algorithm." We maintain a distinction between the k-means objective function and heuristics, such as
the Lloyd method, which aim to find clusterings with low k-means loss. For this section, we assume
that the data lie in Euclidean space, as is often the case when the Lloyd method is applied.
Definition 5 (Lloyd method). Given a data set (X, d), and a set S of points in Rn , the Lloyd
algorithm performs the following steps until two consecutive iterations return the same clustering.
1. Assign each point in X to its closest element of S. That is, find the clustering C of X so
that x ∼_C y if and only if argmin_{c∈S} ||c − x|| = argmin_{c∈S} ||c − y||.
2. Compute the centers of mass of the clusters. Set S = {c_i = (1/|C_i|) Σ_{x∈C_i} x | C_i ∈ C}.
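The two steps translate directly into code. The following is a minimal sketch (our own, in Python/numpy; the function name and iteration cap are illustrative), with data points as rows of X:

```python
import numpy as np

def lloyd(X, S, max_iter=100):
    """Minimal Lloyd iteration: alternate the two steps above until the
    clustering returned by two consecutive iterations is the same."""
    S = np.array(S, dtype=float)
    labels = None
    for _ in range(max_iter):
        # Step 1: assign each point to its closest element of S.
        dists = ((X[:, None, :] - S[None, :, :]) ** 2).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Step 2: replace each center by its cluster's center of mass.
        for i in range(len(S)):
            members = X[labels == i]
            if len(members) > 0:  # keep the old center if a cluster empties
                S[i] = members.mean(axis=0)
    return labels, S
```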
The Lloyd method is highly sensitive to the choice of initial centers. Perhaps the most common
method for initializing the centers for the Lloyd method is to select k random points from the input
data set, proposed by Forgy in 1965 [4]. We refer to this initialization method as Random Centroids.
We propose a slight variation on a deterministic initialization method by Katsavounidis, Kuo, and
Zhang [7], who propose selecting centers that are far apart. First let c_1 and c_2 be the two points furthest away from each other. Then, for all 2 < i ≤ k, let c_i be the point furthest away from its closest existing center. That is, let c_i be the point in X that maximizes min_{1≤j≤i−1} d(c_j, c_i).
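Both initialization schemes are short in code; the sketch below is our own illustrative rendering (names are ours, and k ≥ 2 is assumed for the furthest-point rule):

```python
import numpy as np

def random_centroids(X, k, rng=None):
    """Forgy-style initialization: k data points chosen uniformly at random."""
    rng = rng or np.random.default_rng()
    return X[rng.choice(len(X), size=k, replace=False)].copy()

def furthest_centroids(X, k):
    """Greedy far-point initialization as described above."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    i, j = np.unravel_index(d.argmax(), d.shape)  # the two furthest points
    centers = [int(i), int(j)]
    while len(centers) < k:
        # Pick the point maximizing the distance to its closest chosen center.
        closest = d[:, centers].min(axis=1)
        centers.append(int(closest.argmax()))
    return X[centers].copy()
```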
5.2 Distinguishing heuristics by properties
An analysis of the k-means clustering functions and the two k-means heuristics discussed above
is shown in Figure 2. The analysis illustrates that the k-means function differs significantly from
heuristics that aim to find clusterings with low k-means objective loss. The proofs for this analysis
were omitted due to a lack of space (they appear in the supplementary material).
There are two properties that are satisfied by the k-means clustering function and fail for other reasonable k-clustering functions: outer-consistency and locality. Neither is satisfied by the heuristics.
Note that unlike k-clustering functions that optimize common clustering objective functions, heuristics that aim to find clusterings with low loss for these objective functions do not necessarily make
meaningful k-clustering functions. Therefore, such a heuristic's failure to satisfy certain properties
does not preclude these properties from being axioms of clustering, but rather illustrates a weakness
of the heuristic.
It is interesting that the Lloyd method with the Furthest Centroids initialization technique satisfies
our proposed axioms of clustering while Lloyd with Random Centroid fails threshold richness. This
corresponds to the finding of He et. al. [5] that in practice, Furthest Centroids performs better than
Randomized Centroids.
6 Impossibility Results
In this final section, we strengthen Kleinberg?s famous impossibility result [8], for general clustering
functions, yielding a simpler proof of the original result.
Kleinberg's impossibility theorem (Theorem 2.1, [8]) states that no general clustering function can simultaneously satisfy scale-invariance, richness, and consistency. Ackerman and Ben-David [1] later showed that consistency has some counterintuitive consequences. In Section 1, we showed that many natural clustering functions fail inner-consistency³, which implies that there are many general
clustering functions that fail consistency.
On the other hand, many natural algorithms satisfy outer-consistency. We strengthen Kleinberg?s
impossibility result by relaxing consistency to outer-consistency.
Theorem 1. No general clustering function can simultaneously satisfy outer-consistency, scaleinvariance, and richness.
Proof. Let F be any general clustering function that satisfies outer-consistency, scale-invariance and
richness.
Let X be some domain set with three or more elements. By richness, there exist distance functions
d1 and d2 such that F (X, d1 ) = {X} (every domain point is a cluster on its own) and F (X, d2 ) is
some different clustering, C = {C1 , . . . Ck } of X.
Let r = max{d_1(x, y) : x, y ∈ X} and let c be such that for every x ≠ y, c·d_2(x, y) ≥ r. Define d̂(x, y) = c·d_2(x, y) for every x, y ∈ X. Note that d̂(x, y) ≥ d_1(x, y) for all x, y ∈ X. By outer-consistency, F(X, d̂) = F(X, d_1). However, by scale-invariance, F(X, d̂) = F(X, d_2). This is a contradiction since F(X, d_1) and F(X, d_2) are different clusterings.
A similar result can be obtained, using a similar proof, with inner-consistency replacing outer consistency. Namely,
Lemma 1. No general clustering function can simultaneously satisfy inner-consistency, scaleinvariance, and richness.
Since consistency implies both outer-consistency and inner-consistency, Kleinberg?s original result
follows from any one of Theorem 1 or Lemma 1.
Kleinberg?s impossibility result illustrates property trade-offs for general clustering functions. The
good news is that these results do not apply when the number of clusters is part of the input, as is
illustrated in our taxonomy; single linkage satisfies scale-invariance, consistency and richness.
³ Note that a clustering function and its corresponding general clustering function satisfy the same set of consistency properties.
References
[1] M. Ackerman and S. Ben-David. Measures of Clustering Quality: A Working Set of Axioms for Clustering. NIPS, 2008.
[2] M. Ackerman, S. Ben-David, and D. Loker. Characterization of Linkage-based Clustering. COLT, 2010.
[3] R. Bosagh Zadeh and S. Ben-David. A Uniqueness Theorem for Clustering. The 25th Annual Conference on Uncertainty in Artificial Intelligence (UAI), 2009.
[4] E. Forgy. Cluster analysis of multivariate data: efficiency vs. interpretability of classifications. In WNAR meetings, Univ of Calif Riverside, number 768, 1965.
[5] J. He, M. Lan, C.-L. Tan, S.-Y. Sung, and H.-B. Low. Initialization of cluster refinement algorithms: A review and comparative study. In Proc. IEEE Int. Joint Conf. Neural Networks, pp. 297-302, 2004.
[6] N. Jardine and R. Sibson. Mathematical Taxonomy. Wiley, 1971.
[7] I. Katsavounidis, C.-C. J. Kuo, and Z. Zhang. A new initialization technique for generalized Lloyd iteration. IEEE Signal Processing Letters, 1(10):144-146, 1994.
[8] J. Kleinberg. An Impossibility Theorem for Clustering. Advances in Neural Information Processing Systems (NIPS) 15, 2002.
[9] U. von Luxburg. A Tutorial on Spectral Clustering. Statistics and Computing, 17(4):395-416, 2007.
[10] L. Vendramin, R.J.G.B. Campello, and E.R. Hruschka. On the comparison of relative clustering validity criteria. Sparks, 2009.
Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm
Ruslan Salakhutdinov
Brain and Cognitive Sciences and CSAIL, MIT
Cambridge, MA 02139
[email protected]
Nathan Srebro
Toyota Technological Institute at Chicago
Chicago, Illinois 60637
[email protected]
Abstract
We show that matrix completion with trace-norm regularization can be significantly hurt when entries of the matrix are sampled non-uniformly, but that a properly weighted version of the trace-norm regularizer works well with non-uniform
sampling. We show that the weighted trace-norm regularization indeed yields significant gains on the highly non-uniformly sampled Netflix dataset.
1 Introduction
Trace-norm regularization is a popular approach for matrix completion and collaborative filtering,
motivated both as a convex surrogate to the rank [7, 6] and in terms of a regularized infinite factor
model with connections to large-margin norm-regularized learning [17, 1, 15].
Current theoretical guarantees on using the trace-norm for matrix completion assume a uniform
sampling distribution over entries of the matrix [18, 6, 5, 13]. In a collaborative filtering setting,
where rows of the matrix represent e.g. users and columns represent e.g. movies, this corresponds
to assuming all users are equally likely to rate movies and all movies are equally likely to be rated.
This of course cannot be further from the truth, as invariably some users are more active than others
and some movies are rated by many people while others are rarely rated.
In this paper we show, both analytically and through simulations, that this is not a deficiency of
the proof techniques used to establish the above guarantees. Indeed, a non-uniform sampling distribution can lead to a significant deterioration in prediction quality and an increase in the sample
complexity. Under non-uniform sampling, as many as Ω(n^{4/3}) samples might be needed for learning even a simple (e.g. orthogonal low rank) n × n matrix. This is in sharp contrast to the uniform sampling case, in which Õ(n) samples are enough. It is important to note that if the rank could be minimized directly, which is in general not computationally tractable, Õ(n) samples would be enough to learn a low-rank model even under an arbitrary non-uniform distribution.
Our analysis further suggests a weighted correction to the trace-norm regularizer, that takes into
account the sampling distribution. Although appearing at first as counter-intuitive, and indeed being the opposite of a previously suggested weighting [21], this weighting is well-motivated by our
analytic analysis and we discuss how it corrects the problems that the unweighted trace-norm has
with non-uniform sampling. We show how the weighted trace-norm indeed yields a significant
improvement on the highly non-uniformly sampled Netflix dataset.
The only other work we are aware of that studies matrix completion under non-uniform sampling
is work on exact completion (i.e. when the matrix is assumed to be exactly low rank) under power-law sampling [12]. Other than being limited to one specific distribution, the requirement of the
matrix being exactly low rank is central to this work, and the results cannot be directly applied
in the presence of even small noise. Empirically, the approach leads to deterioration in predictive
performance on the Netflix data [12].
2 Complexity Control in terms of Matrix Factorizations
Consider the problem of predicting the entries of some unknown target matrix Y ∈ R^{n×m} based
on a random subset S of observed entries YS . For example, n and m may represent the number of
users and the number of movies, and Y may represent a matrix of partially observed rating values.
Predicting elements of Y can be done by finding a matrix X minimizing the training error, here
measured as a squared error, and some measure c(X) of complexity. That is, minimizing either:
min_X ||X_S − Y_S||_F^2 + λ c(X)    (1)
or:
min_{c(X)≤C} ||X_S − Y_S||_F^2,    (2)
where Y_S, and similarly X_S, denotes the matrix "masked" by S, where (Y_S)_{i,j} = Y_{i,j} if (i, j) ∈ S and 0 otherwise. For now we ignore possible repeated entries in S and we also assume that n ≥ m without loss of generality. The two formulations (1) and (2) are equivalent up to some (unknown) correspondence between λ and C, and we will be referring to them interchangeably.
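For concreteness, one standard way to (approximately) solve the penalized form (1) with c(X) taken to be the trace-norm is proximal gradient descent, whose proximal step is singular value soft-thresholding. The sketch below is our own illustration of that generic approach, not necessarily the solver used in this paper:

```python
import numpy as np

def trace_norm_completion(Y, mask, lam, iters=200):
    """Approximately minimize ||X_S - Y_S||_F^2 + lam * ||X||_tr.
    `mask` is 1 on observed entries of Y and 0 elsewhere."""
    X = np.zeros_like(Y, dtype=float)
    step = 0.5  # 1/L; the squared-loss gradient 2*mask*(X - Y) is 2-Lipschitz
    for _ in range(iters):
        G = 2 * mask * (X - Y)
        U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
        s = np.maximum(s - step * lam, 0.0)  # soft-threshold the spectrum
        X = (U * s) @ Vt
    return X
```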
A basic measure of complexity is the rank of X, corresponding to the minimal dimensionality k such
that X = U^T V for some U ∈ R^{k×n} and V ∈ R^{k×m}. Directly constraining the rank of X forms
one of the most popular approaches to collaborative filtering. However, the rank is non-convex and
hard to minimize. It is also not clear if a strict dimensionality constraint is most appropriate for
measuring the complexity.
Trace-norm Regularization
Lately, methods regularizing the norm of the factorization U^T V, rather than its dimensionality, have
been advocated and were shown to enjoy considerable empirical success [14, 15]. This corresponds
to measuring complexity in terms of the trace-norm of X, which can be defined equivalently either
as the sum of the singular values of X, or as [7]:
||X||_tr = min_{X = U^T V} (1/2)(||U||_F^2 + ||V||_F^2),    (3)
where the dimensionality of U and V is not constrained. Beyond the modeling appeal of norm-based, rather than dimension-based, regularization, the trace-norm is a convex function of X and so can be minimized by either local search or more sophisticated convex optimization techniques.
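The equality in (3) is easy to confirm numerically: the balanced factorization built from the SVD attains the minimum. A small sanity check (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 30))
W, s, Vt = np.linalg.svd(X, full_matrices=False)
trace_norm = s.sum()                # sum of singular values
U = np.sqrt(s)[:, None] * W.T       # balanced factors with X = U^T V
V = np.sqrt(s)[:, None] * Vt
assert np.allclose(X, U.T @ V)
assert np.isclose(trace_norm, 0.5 * ((U ** 2).sum() + (V ** 2).sum()))
```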
Scaling
The rank, as a measure of complexity, does not scale with the size of the matrix. That is, even very
large matrices can have low rank. Viewing the rank as a complexity measure corresponding to the
number of underlying factors, if data is explained by e.g. two factors, then no matter how many rows
("users") and columns ("movies") we consider, the data will still have rank two. The trace-norm, however, does scale with the size of the matrix. To see this, note that the trace-norm is the ℓ1 norm of the spectrum, while the Frobenius norm is the ℓ2 norm of the spectrum, yielding:
||X||_F ≤ ||X||_tr ≤ ||X||_F · √(rank(X)) ≤ √n · ||X||_F.    (4)
The Frobenius norm certainly increases with the size of the matrix, since the magnitude of each element does not decrease when we have more elements, and so the trace-norm will also increase. The
above suggests measuring the trace-norm relative to the Frobenius norm. Without loss of generality,
consider each target entry to be of roughly unit magnitude, and so in order to fit Y each entry of X must also be of roughly unit magnitude. This suggests scaling the trace-norm by √(nm). More
specifically, we study the trace-norm through the complexity measure:
tc(X) = ||X||_tr^2 / (nm),    (5)
which puts the trace-norm on a comparable scale to the rank. In particular, when each entry of X is, on-average, of unit magnitude (i.e. has unit variance) we have 1 ≤ tc(X) ≤ rank(X).
The relationship between tc(X) and the rank is tight for "orthogonal" low-rank matrices, i.e. low-rank matrices X = U^T V where the rows of U and also the rows of V are orthogonal and of equal magnitudes. In order for the entries in X to have unit magnitude, i.e. ||X||_F^2 = nm, we have that rows in U have norm √(n/√k) and rows in V have norm √(m/√k), yielding precisely tc(X) = rank(X). Such an orthogonal low-rank matrix can be obtained, e.g., when entries of U and V are zero-mean i.i.d. Gaussian with variance 1/√k, corresponding to unit-variance entries in X.
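This construction is easy to reproduce numerically; in the sketch below (our own illustration), the empirical entry variance comes out near 1 and tc(X) near the rank k:

```python
import numpy as np

def tc(X):
    n, m = X.shape
    return np.linalg.svd(X, compute_uv=False).sum() ** 2 / (n * m)

rng = np.random.default_rng(0)
n, m, k = 500, 400, 5
U = rng.normal(0.0, k ** -0.25, (k, n))  # variance 1/sqrt(k) => std k**(-1/4)
V = rng.normal(0.0, k ** -0.25, (k, m))
X = U.T @ V
print(X.var(), tc(X))  # approximately 1 and approximately k = 5
```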
Generalization Guarantees
Another place where we can see that tc(X) plays a similar role to rank(X) is in the generalization
and sample complexity guarantees that can be obtained for low-rank and low-trace-norm learning.
If there is a low-rank matrix X* achieving low average error relative to Y (e.g. if Y = X* + noise), then by minimizing the training error subject to a rank constraint (a computationally intractable task), |S| = Õ(rank(X*)(n + m)) samples are enough in order to guarantee learning a matrix X whose overall average error is close to that of X* [16]. Similarly, if there is a low-trace-norm matrix X* achieving low average error, then minimizing the training error and the trace-norm (a convex optimization problem), |S| = Õ(tc(X*)(n + m)) samples are enough in order to guarantee learning a matrix X whose overall average error is close to that of X* [18]. In these bounds tc(X) plays
precisely the same role as the rank, up to logarithmic factors.
In order to get some intuitive understanding of low-rank learning guarantees, it is enough to consider
the number of parameters in the rank-k factorization X = U ? V . It is easy to see that the number of
parameters in the factorization is roughly k(m + n) (perhaps a bit less due to rotational invariants).
We therefore would expect to be able to learn X when we have roughly this many samples, as is
indeed confirmed by the rigorous sample complexity bounds.
For low-trace-norm learning, consider a sample S of sizep|S| ? Cn,
? for some constant C. Taking
entries of Y to be of unit magnitude, we have kYS kF = ?
|S| ? Cn (recall
? that YS is defined to
?
be zero outside S). From (4) we therefore have: kYS ktr ? Cn ? n = Cn and so tc(YS ) ? C.
That is, we can ?shatter? any sample of size |S| ? Cn with tc(X) = C: no matter what the
underlying matrix Y is, we can always perfectly fit the training data with a low trace-norm matrix
X s.t. tc(X) ? C, without generalizing at all outside S. On the other hand, we must allow matrices
with tc(X) = tc(X ? ), otherwise we can not hope to find X ? , and so we can only constrain tc(X) ?
C = tc(X ? ). We therefore cannot expect to learn with less than ntc(X ? ) samples. It turns out that
this is essentially the largest random sample that can be shattered with tc(X) ? C = tc(X ? ). If we
have more than this many samples we can start learning.
3 Trace-Norm Under a Non-Uniform Distribution
In this section, we analyze trace-norm regularized learning when the sampling distribution is not
uniform. That is, when there is some, known or unknown, non-uniform distribution D over entries
of the matrix Y (i.e. over index pairs (i, j)) and our sample S is sampled i.i.d. from D. Our objective
is to get low average error with respect to the distribution D. That is, we measure generalization
performance in terms of the weighted sum-squared-error:
||X − Y||_D^2 = E_{(i,j)∼D}(X_{ij} − Y_{ij})^2 = Σ_{ij} D(i, j)(X_{ij} − Y_{ij})^2.    (6)
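For a finite matrix the definition is a one-liner; the helper below (our own illustration, with D given as an n × m matrix of entry probabilities summing to one) makes it concrete:

```python
import numpy as np

def weighted_error(X, Y, D):
    """Eq. (6): sum over (i, j) of D(i, j) * (X_ij - Y_ij)^2."""
    return float((D * (X - Y) ** 2).sum())
```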
We first point out that when using the rank for complexity control, i.e. when minimizing the training
error subject to a low-rank constraint, non-uniformity does not pose a problem. The same generalization and learning guarantees that can be obtained in the uniform case, also hold under an arbitrary
distribution D. In particular, if there is some low-rank X* such that ||X* − Y||_D^2 is small, then Õ(rank(X*)(n + m)) samples are enough in order to learn (by minimizing training error subject to a rank constraint) a matrix X with ||X − Y||_D^2 almost as small as ||X* − Y||_D^2 [16].
However, the same does not hold when learning using the trace-norm. To see this, consider an
orthogonal rank-k square n × n matrix, and a sampling distribution which is uniform over an n_A × n_A sub-matrix A, with n_A = n^a. That is, the row (e.g. "user") is selected uniformly among the first n_A rows, and the column (e.g. "movie") is selected uniformly among the first n_A columns. We will use A to denote the subset of entries in the submatrix, i.e. A = {(i, j) | 1 ≤ i, j ≤ n_A}.
S, we have:
tc(Y_S) = ||Y_S||_tr^2 / n^2 ≤ ||Y_S||_F^2 · rank(Y_S) / n^2 ≤ |S| · n^a / n^2 = |S| / n^{2−a},    (7)
where we again take the entries in Y to be of unit magnitude. In the second inequality above we
use the fact that YS is zero outside of A, and so we can bound the rank of YS by the dimensionality
n_A = n^a of A.
Setting a < 1, we see that we can shatter any sample of size¹ kn^{2−a} = ω̃(n) with a matrix X for which tc(X) < k. When a ≤ 1/2, the total number of entries in A is less than n. In this case Õ(n) observations are enough in order to memorize² Y_A. But when 1/2 < a < 1, with Õ(n) observations, restricting to even tc(X) < 1, we can neither learn Y, since we can shatter Y_S, nor memorize it. For example, when a = 2/3 and so n_A = n^{2/3}, we need roughly n^{4/3} samples to start learning by constraining tc(X) to a constant, the same as we would need in order to memorize Y_A. This is a factor of n^{1/3} greater than the sample size needed to learn a matrix with constant tc(X) in the uniform case.
greater than the sample size needed to learn a matrix with constant tc(X) in the uniform case.
The above arguments establish that restricting the complexity to tc(X) < k might not lead to generalization with Õ(kn) samples in the non-uniform case. But does this mean that we cannot learn a rank-k matrix by minimizing the trace-norm using Õ(kn) samples when the sampling distribution is concentrated on a small submatrix? Of course this is not the case. Since the samples are uniform on a small submatrix, we can just think of the submatrix A as our entire space. The target matrix still has low rank, even when restricted to A, and we are back in the uniform sampling scenario. The only issue here is that tc(X) ≤ k, i.e. ||X||_tr ≤ n√k, is the right constraint in the uniform observation scenario. When samples are concentrated in A, we actually need to restrict to a much smaller trace norm, ||X||_tr ≤ n^a · √k, which will allow learning with Õ(kn^a) samples.
We can, however, modify the example and construct a sampling distribution under which Ω(n^{4/3}) samples are required in order to learn even an "orthogonal" low-rank matrix, no matter what constraint is placed on the trace-norm. This is a significantly larger sample complexity than Õ(kn),
which is what we would expect, and what is required for learning by constraining the rank directly.
To do so, consider another submatrix B of size n_B × n_B with n_B = n/2, such that the rows and columns of A and B do not overlap (see figure). Now, consider a sampling distribution D which is uniform over A with probability half, and uniform over B with probability half. Consider fitting a noisy matrix Y = X* + noise where X* is "orthogonal" rank-k. In order to fit on B, we need to allow a trace-norm of at least ||X*_B||_tr = (n/2)·√k, i.e. allow tc(X) = k/4. But as discussed above, with such a generous constraint on the trace-norm, we will be able to shatter S ∩ A whenever |S ∩ A| = |S|/2 ≥ kn^{2−a}/4. Since there is no overlap in rows and columns, and so values in the sub-matrices A and B are independent, shattering S ∩ A means we cannot hope to learn in A. Setting a = 2/3 as before, with o(n^{4/3}) samples, we cannot learn in A and B jointly: either we constrain to a trace-norm which is too low to fit X*_B (we under-fit on B), or we allow a trace-norm which is high enough to overfit Y_{S∩A}. In any case, we will make errors on at least half the mass of D.³
Empirical Example
Let us consider a simple simulation experiment that will help illustrate this phenomenon. Consider a simple synthetic example, where we used n_A = 300 and n_B = 4700, with an orthogonal rank-2 matrix X* and Y = X* + N(0, 1) (in case of repeated entries, the noise is independent for
each appearance in the sample). The training sample size was also set to |S|=140,000.
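A sketch of how such data can be generated (our own reconstruction from the sizes stated above; helper and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
nA, nB, k, m = 300, 4700, 2, 140_000
n = nA + nB
U = rng.normal(0.0, k ** -0.25, (k, n))  # "orthogonal" rank-2 X* with unit-variance entries
V = rng.normal(0.0, k ** -0.25, (k, n))
Xstar = U.T @ V

def sample_block(lo, hi, size):
    # (row, column) indices drawn uniformly from one diagonal block
    return rng.integers(lo, hi, size), rng.integers(lo, hi, size)

rA, cA = sample_block(0, nA, m // 2)     # half the sample uniform over A
rB, cB = sample_block(nA, n, m // 2)     # half the sample uniform over B
rows = np.concatenate([rA, rB])
cols = np.concatenate([cA, cB])
y_obs = Xstar[rows, cols] + rng.standard_normal(m)  # independent noise per draw
```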
The three curves of Fig. 1 measure the excess (test) error ||X − X*||_D^2 = ||X − Y||_D^2 − ||Y − X*||_D^2
of the learned model, as well as the error contribution from A and from B, as a function of the
constraint on tc(X), for the sampling distribution discussed above and a specific sample size. As
can be seen, although it is possible to constrain tc(X) so as to achieve squared-error of less than 0.8
on B, this constraint is too lax for A and allows for over-fitting. Constraining tc(X) so as to avoid
overfitting A (achieving almost zero excess test error), leads to a suboptimal fit on B.
¹ Recall that f(n) = ω̃(g(n)) iff for all p, g(n)·log^p(n) / f(n) → 0.
² The algorithm saw all (or most) entries of the matrix and does not need to predict any unobserved entries.
³ More accurately, if we do allow high enough trace-norm to fit B, and |S| = o(n^{4/3}), then the "cost" of overfitting Y_{S∩A} is negligible compared to the cost of fitting X*_B. For large enough n, we would be tempted to very slightly deteriorate the fit of X*_B in order to "free up" enough trace-norm and completely overfit Y_{S∩A}.
[Figure 1: three panels plotting Mean Squared Error against the constraint on tc(X) (left), the constraint on tc_pq(X) (middle), and the regularization parameter λ on a log scale (right); each panel shows curves for A, B, and A+B, with shifted variants ("shift A", "shift A+B") in the right panel.]
Figure 1: Left: Mean squared error (MSE) of the learned model as a function of the constraint on tc(X) (left), tc_pq(X) (middle). Right: The solid curves show the optimum of the mean squared error objective (9) (unweighted trace-norm), as a function of the regularization parameter λ. The dashed curves display a weighted trace-norm. The black (middle) curve is the overall MSE error, the red (bottom) curve measures only the contribution from A, and the blue (top) curve measures only the contribution from B.
Penalty Formulation
Until now we discussed learning by constraining the trace-norm, i.e. using the formulation (2). It is
also insightful to consider the penalty view (1), i.e. learning by minimizing
min_X ||Y_S − X_S||_F^2 + λ||X||_tr.    (8)
First observe that the characterization (3) allows us to decompose ||X||_tr = ||X_A||_tr + ||X_B||_tr, where w.l.o.g. we take all columns of U and V outside A and B to be zero. Since we also have ||Y_S − X_S||_F^2 = ||Y_{A∩S} − X_{A∩S}||_F^2 + ||Y_{B∩S} − X_{B∩S}||_F^2, we can decompose the training objective (8) as:
||Y_S − X_S||_F^2 + λ||X||_tr = (||Y_{A∩S} − X_{A∩S}||_F^2 + λ||X_A||_tr) + (||Y_{B∩S} − X_{B∩S}||_F^2 + λ||X_B||_tr)
= (||Y_{A∩S} − X_{A∩S}||_F^2 + λ n_A √(tc_A(X_A))) + (||Y_{B∩S} − X_{B∩S}||_F^2 + λ n_B √(tc_B(X_B))),    (9)
where tc_A(X_A) = ||X_A||_tr^2 / n_A^2 (and similarly tc_B(X_B)) refers to the complexity measure tc(·)
measured relative to the size of A (similarly B). We see that the training objective decomposes
to objectives over A and B. Each one of these corresponds to a trace-norm regularized learning
problem, under a uniform sampling distribution (in the corresponding submatrix) of a noisy low-rank
"orthogonal" matrix, and can therefore be learned with Õ(kn_A) and Õ(kn_B) samples respectively. In other words, Õ(kn) samples should be enough to learn both inside A and inside B.
However, the regularization tradeoff parameter λ compounds the two problems. When the objective is expressed in terms of tc(·), as in (9), the regularization tradeoff is scaled differently in each part of the training objective. With Õ(kn) samples, it is possible to learn in A with some setting of λ, and it is possible to learn in B with some other setting of λ, but from the discussion above we learn that no single value of λ will allow learning in both A and B. Either λ is too high, yielding too strict regularization in B so that learning on B is not possible, perhaps since it is scaled by n_A ≪ n_B; or λ is too small and does not provide enough regularization in A.
Returning to our simulation experiment, the solid curves of Fig. 1, right panel, show the excess test error for the minimizer of the training objective (9), as a function of the regularization tradeoff parameter λ. Note that these are essentially the same curves as displayed in Fig. 1, except the path of regularized solutions is now parameterized by λ rather than by the bound on tc(X). Not surprisingly, we see the same phenomena: different values of λ are required for optimal learning on A and on B. Forcing the same λ on both parts of the training objective (9) yields a deterioration in the generalization performance.
4 Weighted Trace Norm
The decomposition (9) and the discussion in the previous section suggests weighting the trace-norm by the frequency of rows and columns. For a sampling distribution D, denote by p(i) the row marginal, i.e. the probability of observing row i, and similarly denote by q(j) the column marginal. We propose using the following weighted version of the trace-norm as a regularizer:

‖X‖_{tr(p,q)} = ‖diag(√p) X diag(√q)‖_tr = min_{X=UV'} (1/2) ( Σ_i p(i) ‖U_i‖² + Σ_j q(j) ‖V_j‖² ),    (10)
where diag(√p) is a diagonal matrix with √p(i) on its diagonal (similarly diag(√q)). The corresponding normalized complexity measure is given by tc_pq(X) = ‖X‖²_{tr(p,q)}. Note that for a uniform distribution we have that tc_pq(X) = tc(X). Furthermore, it is easy to verify that for an "orthogonal" rank-k matrix X we have tc_pq(X) = k for any sampling distribution.
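To make the definition operational, the first characterization in (10) can be computed directly: rescale row i of X by √p(i) and column j by √q(j), then sum the singular values of the rescaled matrix. The following is a minimal NumPy sketch under that reading; the function names are our own, not from any library, and the closing check uses the fact noted above that uniform marginals recover tc(X).

    import numpy as np

    def weighted_trace_norm(X, p, q):
        # ||X||_tr(p,q) = ||diag(sqrt(p)) X diag(sqrt(q))||_tr:
        # rescale rows by sqrt(p(i)), columns by sqrt(q(j)), sum singular values.
        Xw = np.sqrt(p)[:, None] * X * np.sqrt(q)[None, :]
        return np.linalg.svd(Xw, compute_uv=False).sum()

    def tc_pq(X, p, q):
        # Normalized complexity measure tc_pq(X) = ||X||_tr(p,q)^2.
        return weighted_trace_norm(X, p, q) ** 2

    # With uniform marginals, tc_pq(X) recovers tc(X) = ||X||_tr^2 / (n m).
    n, m = 40, 60
    X = np.random.randn(n, m)
    p, q = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    tc_uniform = np.linalg.svd(X, compute_uv=False).sum() ** 2 / (n * m)
    assert np.isclose(tc_pq(X, p, q), tc_uniform)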
Equipped with the weighted trace-norm as a regularizer, let us revisit the problematic sampling distribution studied in the previous Section. In order to fit the "orthogonal" rank-k X*, we need a weighted trace-norm of ‖X*‖_{tr(p,q)} = √(tc_pq(X*)) = √k. How large a sample S ⊂ A can we now shatter using such a weighted trace-norm? We can shatter a sample if ‖Y_{S∩A}‖_{tr(p,q)} ≤ √k. We can calculate:

‖Y_{S∩A}‖_{tr(p,q)} = ‖Y_{S∩A}‖_tr / (2 n_A) ≤ √(|S∩A| n_A) / (2 n_A) = √(|S| / (8 n_A)).    (11)

That is, we can shatter a sample of size up to |S| = 8 k n_A < 8kn. The calculation for B is identical.
It seems that now, with a fixed constraint on the weighted trace-norm, we have enough capacity to both fit X*, and with Õ(kn) samples, avoid overfitting on A.
Returning to the penalization view, we can again decompose the training objective as:

‖Y_S − X_S‖²_F + λ ‖X‖_{tr(p,q)} =
  (‖Y_{A∩S} − X_{A∩S}‖²_F + λ/2 √(tc_A(X_A))) + (‖Y_{B∩S} − X_{B∩S}‖²_F + λ/2 √(tc_B(X_B))),    (12)

avoiding the scaling by the block sizes which we encountered in (9).
Returning to the synthetic experiments of Fig. 1 (right panel), and comparing (9) with (12), we see that introducing the weighting corresponds to a relative change of n_A/n_B in the correspondence of the regularization tradeoff parameters used for A and for B. This corresponds to a shift of log(n_A/n_B) in the log-domain used in the figure. Shifting the solid red (bottom) curve by this amount yields the dashed red (bottom) curve. The solid blue (top) curve and the dashed red (bottom) curve thus represent the excess error on B and on A when the weighted trace norm is used, i.e. the training objective (12) is minimized. The dashed black (middle) curve is the overall excess error when using this training objective. As can be seen, the weighting aligns the excess errors on A and on B much better, and yields a lower overall error. The weighted trace-norm achieves the lowest MSE of 0.4301 with corresponding λ = 0.11. This is compared to the lowest MSE of 0.4981 with λ = 0.80, achieved by the unweighted trace-norm.

It is also interesting to observe that the weighted trace-norm outperforms its unweighted counterpart for a wide range of regularization parameters λ ∈ [0.01, 0.6]. This may also suggest that in practice, particularly when working with large and imbalanced datasets, it may be easier to search for regularization parameters using the weighted trace-norm.
Finally, Fig. 1, right panel, also suggests that the optimal shift might actually be smaller than n_A/n_B. We can consider a smaller shift by using the partially-weighted trace-norm:

‖X‖_{tr(p,q,α)} = ‖diag(p^{α/2}) X diag(q^{α/2})‖_tr = min_{X=UV'} (1/2) ( Σ_i p(i)^α ‖U_i‖² + Σ_j q(j)^α ‖V_j‖² ),

and the corresponding normalized complexity measure tc_α(X) = ‖X‖²_{tr(p^α/n^{1−α}, q^α/m^{1−α})}.
Other Weightings and Bayesian Perspective
The weighted trace-norm motivated by the analysis here (with α = 1) implies that the frequent users (equivalently movies) get regularized much stronger than the rare users (equivalently movies). This might at first seem quite counter-intuitive, as the natural weighting might seem to be the opposite. Indeed, Weimer et al. [21] speculated that with a uniform weighting (α = 0) frequent users are regularized too heavily compared to infrequent users, and so suggested regularizing frequent users (and movies) with a lower weight, corresponding to α = −1. Although this might seem natural, we saw here that the reverse is actually true: the Weimer et al. weighting (α = −1) would only make things worse. Indeed, given the analysis here, Weimer et al. actually observed a deterioration in prediction quality when using their weighting. This is also demonstrated in the experiments on the Netflix data in Section 6.
The weighted regularization motivated here (with α = 1) is also quite unusual from a Bayesian perspective. The trace-norm can be viewed as a negative-log-prior for the Probabilistic Matrix Factorization model [15], where entries of U, V are taken to be i.i.d. Gaussian. The two terms of (8) can then be interpreted as a log-likelihood and log-prior, and minimizing (8) corresponds to finding the MAP parameters. Introducing weighting (with α = 1) effectively states that the effect of the prior becomes stronger as we observe more data. Yet, our analysis strongly suggests that in a non-uniform setting, such "unorthodox" regularization is crucial for achieving good generalization performance.
5 Practical Implementation
When dealing with large datasets, such as the Netflix data, the most practical way to fit trace-norm regularized models is through stochastic gradient descent [15, 8]. Let n_i = Σ_j S_ij and m_j = Σ_i S_ij denote the number of observed ratings for user i and movie j respectively. The training objective using the partially-weighted trace-norm can be written as:

Σ_{(i,j)∈S} [ (Y_ij − U_i' V_j)² + (λ/2) ( (p(i)^α / n_i) ‖U_i‖² + (q(j)^α / m_j) ‖V_j‖² ) ],

where U ∈ R^{k×n} and V ∈ R^{k×m}. We can optimize this objective using stochastic gradient descent by picking one training pair (i, j) at random at each iteration, and taking a step in the direction opposite the gradient of the term corresponding to the chosen (i, j).
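As an illustration, here is a minimal sketch of one such stochastic step. It is written against the empirically-weighted objective (13) given below, so the per-pair penalty is (λ/(2|S|))(n_i^{α−1}‖U_i‖² + m_j^{α−1}‖V_j‖²); the variable names, data layout and step size are our own choices rather than the authors'. Note how setting α = 1 makes the counts n_i and m_j drop out of the update entirely.

    import numpy as np

    def sgd_step(U, V, i, j, y_ij, n_i, m_j, lam, s, alpha=1.0, lr=0.005):
        # U: k x n, V: k x m. One step on the (i, j) term of objective (13):
        # (y_ij - U_i'V_j)^2 + (lam/(2s)) * (n_i**(alpha-1) * ||U_i||^2
        #                                    + m_j**(alpha-1) * ||V_j||^2),
        # where s = |S|. With alpha = 1 the count-dependent weights equal 1.
        err = y_ij - U[:, i] @ V[:, j]
        gU = -2.0 * err * V[:, j] + (lam / s) * n_i ** (alpha - 1.0) * U[:, i]
        gV = -2.0 * err * U[:, i] + (lam / s) * m_j ** (alpha - 1.0) * V[:, j]
        U[:, i] -= lr * gU
        V[:, j] -= lr * gV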
Note that even though this objective as a function of U and V is non-convex, there are no non-global local minima if we set k to be large enough, i.e. k > min(n, m) [2]. However, in practice using very large values of k becomes computationally expensive. Instead, we consider truncated trace-norm minimization by restricting k to smaller values. In the next section we demonstrate that even when using the truncated trace-norm, its weighted version significantly improves the model's prediction performance.
In our experiments, we also replace the unknown row marginals p(i) and column marginals q(j) by their empirical estimates p̂(i) = n_i/|S| and q̂(j) = m_j/|S|. This results in the following objective:

Σ_{(i,j)∈S} [ (Y_ij − U_i' V_j)² + (λ/(2|S|)) ( n_i^{α−1} ‖U_i‖² + m_j^{α−1} ‖V_j‖² ) ].    (13)
Setting α = 1, corresponding to the weighted trace-norm (10), results in stochastic gradient updates that do not involve the row and column counts at all and are in some sense the simplest. Strangely, and likely originating as a "bug" in calculating the stochastic gradients by one of the participants, these steps match the stochastic training used by many practitioners on the Netflix dataset, without explicitly considering the weighted trace-norm [8, 19, 15].
6 Experimental results
We tested the weighted trace-norm on the Netflix dataset, which is the largest publicly available collaborative filtering dataset. The training set contains 100,480,507 ratings from 480,189 anonymous users on 17,770 movie titles. Netflix also provides a qualification set, containing 1,408,395 ratings, out of which we set aside 100,000 ratings for validation. The "qualification set" pairs were selected by Netflix from the most recent ratings for a subset of the users. Due to the special selection scheme, ratings from users with few ratings are overrepresented in the qualification set, relative to the training set. To be able to report results where the train and test sampling distributions are the same, we also created a "test set" by randomly selecting and removing 100,000 ratings from the training set. All ratings were normalized to be zero-mean by subtracting 3.6. The dataset is very imbalanced: it includes users with over 10,000 ratings as well as users who rated fewer than 5 movies.

For various values of α, we learned a factorization U'V with k = 30 and with k = 100 dimensions (factors) using stochastic gradient descent as in (13). For each value of α and k we selected the regularization tradeoff λ by minimizing the error on the 100,000 qualification set examples set aside for validation. Results on both the Netflix qualification set and on the test set we created are reported in Table 1. Recall that the sampling distribution of the "test set" matches that of the training data, while the qualification set is sampled differently, explaining the large difference in generalization between the two.
Table 1: Root Mean Squared Error (RMSE) on the Netflix qualification set and on a test set that was held out from the training data, for training by minimizing (13). We report λ/|S| minimizing the error on the validation set (held out from the qualification set), qualification and test errors using this tradeoff, and tc_α(X) at the optimum. Last row: training by regularizing the max-norm.
α         k    λ/|S|   tc_α(X)   Test     Qual      k     λ/|S|   tc_α(X)   Test     Qual
1         30   0.05    4.34      0.7607   0.9105    100   0.08    5.47      0.7412   0.9071
0.9       30   0.07    4.27      0.7573   0.9091    100   0.1     5.23      0.7389   0.9062
0.75      30   0.2     5.04      0.7723   0.9128    100   0.3     6.24      0.7491   0.9098
0.5       30   0.5     7.32      0.7823   0.9159    100   0.8     9.65      0.7613   0.9127
0         30   2.5     10.36     0.7889   0.9235    100   3.0     21.23     0.7667   0.9203
−1        30   450     11.41     0.7913   0.9256    100   700     23.31     0.7713   0.9221
‖X‖_max   30   mc(X) = 5.06      0.7692   0.9131    100   mc(X) = 5.77      0.7432   0.9092
For both k = 30 and k = 100, the weighted trace-norm (α = 1) significantly outperformed the unweighted trace-norm (α = 0). Interestingly, the optimal weighting (setting of α) was a bit lower than, but very close to, α = 1. For completeness, we also evaluated the weighting suggested by Weimer et al. [21], corresponding to α = −1. Unsurprisingly, given our analysis, this seemingly intuitive weighting hurts predictive performance.

For both k = 30 and k = 100, we also observed that for the weighted trace-norm (α = 1) good generalization is possible with a wide range of λ settings, while for the unweighted trace-norm (α = 0), the results were much more sensitive to the setting of λ. This confirms our previous results on the synthetic experiment and strongly suggests that it may be far easier to search for regularization parameters using the weighted trace-norm.
Comparison with the Max-Norm

We also compared the predictive performance on Netflix to predictions based on max-norm regularization. The max-norm is defined as:

‖X‖_max = min_{X=UV'} (1/2) ( max_i ‖U_i‖² + max_j ‖V_j‖² ).    (14)
Similarly to the rank, but unlike the trace-norm, generalization and learning guarantees based on the max-norm hold also under an arbitrary, non-uniform, sampling distribution. Specifically, defining mc(X) = ‖X‖²_max (no normalization is necessary here), Õ(mc(X)(n + m)) samples are enough for generalization w.r.t. any sampling distribution (just like the rank) [18]. This suggests that perhaps the max-norm can be used as an alternative factorization regularizer in the presence of non-uniform sampling. Indeed, as evident in Table 1, max-norm based regularization does perform much better than the unweighted trace-norm. The differences between the max-norm and the weighted trace-norm are small, but it seems that using the weighted trace-norm is slightly but consistently better.
7 Summary
In this paper we showed both analytically and empirically that under non-uniform sampling, trace-norm regularization can lead to significant performance deterioration and an increase in sample complexity. Our analytic analysis suggests a non-intuitive weighting for the trace-norm in order to correct the problem. Our results on both synthetic and on the highly imbalanced Netflix datasets further demonstrate that the weighted trace-norm yields significant improvements in prediction quality.

In terms of optimization, we focused on stochastic gradient descent, both since it is a simple and practical method for very large-scale trace-norm optimization [15, 8], and since the weighting was originally stumbled upon through this optimization approach. However, most recently proposed methods for trace-norm optimization (e.g. [3, 10, 9, 11, 20]) can also be easily modified for the weighted trace-norm.

We hope that the weighted trace-norm, and the discussions in Sections 3 and 4, will be helpful in deriving theoretical learning guarantees for arbitrary non-uniform sampling distributions, both in the form of generalization error bounds as in [18], and generalizing the compressed-sensing inspired work on recovery of noisy low-rank matrices as in [4, 13].
Acknowledgments RS is supported by NSERC, Shell, and NTT Communication Sciences Laboratory.
References
[1] J. Abernethy, F. Bach, T. Evgeniou, and J.P. Vert. A new approach to collaborative filtering: Operator estimation with spectral regularization. Journal of Machine Learning Research, 10:803-826, 2009.
[2] S. Burer and R.D.C. Monteiro. Local minima and convergence in low-rank semidefinite programming. Mathematical Programming, 103(3):427-444, 2005.
[3] J.F. Cai, E.J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20:1956, 2010.
[4] E.J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE (to appear), 2009.
[5] E.J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9, 2009.
[6] E.J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theory (to appear), 2009.
[7] M. Fazel, H. Hindi, and S.P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings American Control Conference, volume 6, 2001.
[8] Yehuda Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In ACM SIGKDD, pages 426-434, 2008.
[9] Z. Liu and L. Vandenberghe. Interior-point method for nuclear norm approximation with application to system identification. SIAM Journal on Matrix Analysis and Applications, 31(3):1235-1256, 2009.
[10] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, pages 1-33, 2009.
[11] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. Journal of Machine Learning Research, 11:2287-2322, 2010.
[12] R. Meka, P. Jain, and I. S. Dhillon. Matrix completion from power-law distributed samples. In Advances in Neural Information Processing Systems, volume 21, 2009.
[13] B. Recht. A simpler approach to matrix completion. Preprint, available from the author's webpage, 2009.
[14] J.D.M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In ICML, page 719, 2005.
[15] Ruslan Salakhutdinov and Andriy Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, volume 20, 2008.
[16] N. Srebro, N. Alon, and T. Jaakkola. Generalization error bounds for collaborative prediction with low-rank matrices. In Advances in Neural Information Processing Systems 17, 2005.
[17] N. Srebro, J. Rennie, and T. Jaakkola. Maximum margin matrix factorization. In Advances in Neural Information Processing Systems 17, 2005.
[18] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In COLT, 2005.
[19] Gábor Takács, István Pilászy, Bottyán Németh, and Domonkos Tikk. Scalable collaborative filtering approaches for large recommender systems. Journal of Machine Learning Research, 10:623-656, 2009.
[20] R. Tomioka, T. Suzuki, M. Sugiyama, and H. Kashima. A fast augmented Lagrangian algorithm for learning low-rank matrices. In ICML, pages 1087-1094, 2010.
[21] M. Weimer, A. Karatzoglou, and A. Smola. Improving maximum margin matrix factorization. Machine Learning, 72(3):263-276, 2008.
Extensions of Generalized Binary Search to Group
Identification and Exponential Costs
Gowtham Bellala¹, Suresh K. Bhavnani²,³,⁴, Clayton Scott¹
¹ Department of EECS, University of Michigan, Ann Arbor, MI 48109
² Institute for Translational Sciences, ³ Dept. of Preventative Medicine and Community Health, University of Texas Medical Branch, Galveston, TX 77555
⁴ School of Biomedical Informatics, University of Texas, Houston, TX 77030
[email protected], [email protected], [email protected]
Abstract
Generalized Binary Search (GBS) is a well known greedy algorithm for identifying an unknown object while minimizing the number of "yes" or "no" questions posed about that object, and arises in problems such as active learning and active diagnosis. Here, we provide a coding-theoretic interpretation for GBS and show that GBS can be viewed as a top-down algorithm that greedily minimizes the expected number of queries required to identify an object. This interpretation is then used to extend GBS in two ways. First, we consider the case where the objects are partitioned into groups, and the objective is to identify only the group to which the object belongs. Then, we consider the case where the cost of identifying an object grows exponentially in the number of queries. In each case, we present an exact formula for the objective function involving Shannon or Rényi entropy, and develop a greedy algorithm for minimizing it.
1 Introduction
In applications such as active learning [1, 2, 3, 4], disease/fault diagnosis [5, 6, 7], toxic chemical identification [8], computer vision [9, 10] or the adaptive traveling salesman problem [11], one often encounters the problem of identifying an unknown object while minimizing the number of binary questions posed about that object. In these problems, there is a set Θ = {θ_1, …, θ_M} of M different objects and a set Q = {q_1, …, q_N} of N distinct subsets of Θ known as queries. An unknown object θ is generated from this set Θ with a certain prior probability distribution Π = (π_1, …, π_M), i.e., π_i = Pr(θ = θ_i), and the goal is to uniquely identify this unknown object through as few queries from Q as possible, where a query q ∈ Q returns a value 1 if θ ∈ q, and 0 otherwise. For example, in active learning, the objects are classifiers and the queries are the labels for fixed test points. In active diagnosis, objects may correspond to faults, and queries to alarms. This problem has been generically referred to as binary testing or object/entity identification in the literature [5, 12]. We will refer to this problem as object identification. Our attention is restricted to the case where Θ and Q are finite, and the queries are noiseless.

The goal in object identification is to construct an optimal binary decision tree, where each internal node in the tree is associated with a query from Q, and each leaf node corresponds to an object from Θ. Optimality is often with respect to the expected depth of the leaf node corresponding to the unknown object θ. In general the determination of an optimal tree is NP-complete [13]. Hence, various greedy algorithms [5, 14] have been proposed to obtain a suboptimal binary decision tree. A well studied algorithm for this problem is known as the splitting algorithm [5] or generalized binary search (GBS) [1, 2]. This is the greedy algorithm which selects a query that most evenly divides the probability mass of the remaining objects [1, 2, 5, 15].
GBS assumes that the end goal is to rapidly identify individual objects. However, in applications such as disease diagnosis, where Θ is a collection of possible diseases, it may only be necessary to identify the intervention or response to an object, rather than the object itself. In these problems, the object set Θ is partitioned into groups and it is only necessary to identify the group to which the unknown object belongs. We note below that GBS is not necessarily efficient for group identification.

To address this problem, we first present a new interpretation of GBS from a coding-theoretic perspective by viewing the problem of object identification as constrained source coding. Specifically, we present an exact formula for the expected number of queries required to identify an unknown object in terms of the Shannon entropy of the prior distribution Π, and show that GBS is a top-down algorithm that greedily minimizes this cost function. Then, we extend this framework to the problem of group identification and derive a natural extension of GBS for this problem.

We also extend the coding-theoretic framework to the problem of object (or group) identification where the cost of identifying an object grows exponentially in the number of queries, i.e., the cost of identifying an object using d queries is λ^d for some fixed λ > 1. Applications where such a scenario arises have been discussed earlier in the context of source coding [16], random search trees [17] and design of alphabetic codes [18], for which efficient optimal or greedy algorithms have been presented. In the context of object/group identification, the exponential cost function has certain advantages in terms of avoiding deep trees (which is crucial in time-critical applications) and being more robust to misspecification of the prior probabilities. However, there does not exist an algorithm, to the best of our knowledge, that constructs a good suboptimal decision tree for the problem of object/group identification with exponential costs. Once again, we show below that GBS is not necessarily efficient for minimizing the exponential cost function, and propose an improved greedy algorithm that generalizes GBS.
1.1 Notation
We denote an object identification problem by a pair (B, Π) where B is a known M × N binary matrix with b_ij equal to 1 if θ_i ∈ q_j, and 0 otherwise. A decision tree T constructed on (B, Π) has a query from the set Q at each of its internal nodes, with the leaf nodes terminating in the objects from Θ. For a decision tree with L leaves, the leaf nodes are indexed by the set L = {1, …, L} and the internal nodes are indexed by the set I = {L + 1, …, 2L − 1}. At any node 'a', let Q_a ⊆ Q denote the set of queries that have been performed along the path from the root node up to that node. An object θ_i reaches node 'a' if it agrees with the true θ on all queries in Q_a, i.e., the binary values in B for the rows corresponding to θ_i and θ are the same over the columns corresponding to queries in Q_a. At any internal node a ∈ I, let l(a), r(a) denote the 'left' and 'right' child nodes, and let Θ_a ⊆ Θ denote the set of objects that reach node 'a'. Thus, the sets Θ_{l(a)} ⊆ Θ_a, Θ_{r(a)} ⊆ Θ_a correspond to the objects in Θ_a that respond 0 and 1 to the query at node 'a', respectively. We denote by π_{Θ_a} := Σ_{i:θ_i∈Θ_a} π_i the probability mass of the objects reaching node 'a' in the tree. Finally, we denote the Shannon entropy of a proportion π ∈ [0, 1] by H(π) := −π log₂ π − (1 − π) log₂(1 − π), and that of a vector Π = (π_1, …, π_M) by H(Π) := −Σ_i π_i log₂ π_i, where we use the limit lim_{π→0} π log₂ π = 0 to define the value of 0 log₂ 0.
2 GBS Greedily Minimizes the Expected Number of Queries
We begin by noting that object identification reduces to the standard source coding problem [19] in the special case when Q is complete, meaning, for any S ⊆ Θ there exists a query q ∈ Q such that either q = S or Θ \ q = S. Here, the problem of constructing an optimal binary decision tree is equivalent to constructing optimal variable length binary prefix codes, for which there exists an efficient optimal algorithm known as the Huffman algorithm [20]. It is also known that the expected length of any binary prefix code (i.e., expected depth of any binary decision tree) is bounded below by the Shannon entropy of the prior distribution Π [19].

For the problem of object identification, where Q is not complete, the entropy lower bound is still valid, but Huffman coding cannot be implemented. In this case, GBS is a greedy, top-down algorithm that is analogous to Shannon-Fano coding [21, 22]. We now show that GBS is actually greedily minimizing the expected number of queries required to identify an object.
First, we define a parameter called the reduction factor on the binary matrix/tree combination that provides a useful quantification on the expected number of queries required to identify an object.

Definition 1 (Reduction factor). Let T be a decision tree constructed on the pair (B, Π). The reduction factor at any internal node 'a' in the tree is defined by ρ_a = max{π_{Θ_{l(a)}}, π_{Θ_{r(a)}}}/π_{Θ_a}.

Note that 0.5 ≤ ρ_a ≤ 1. Given an object identification problem (B, Π), let T(B, Π) denote the set of decision trees that can uniquely identify all the objects in the set Θ. We assume that the rows of B are distinct so that T(B, Π) ≠ ∅. For any decision tree T ∈ T(B, Π), let {ρ_a}_{a∈I} denote the set of reduction factors and let d_i denote the number of queries required to identify object θ_i in the tree. Then the expected number of queries required to identify an unknown object using a tree (or, the expected depth of a tree) is L_1(Π) = Σ_i π_i d_i. Note that the cost function depends on both Π and d = (d_1, …, d_M). However, we do not show the dependence on d explicitly.

Theorem 1. For any T ∈ T(B, Π), the expected number of queries required to identify an unknown object is given by

L_1(Π) = H(Π) + Σ_{a∈I} π_{Θ_a} [1 − H(ρ_a)].    (1)
Theorems 1, 2 and 3 are special cases of Theorem 4, whose proof is sketched in the Appendix. Complete proofs are given in the Supplemental Material. Since H(ρ_a) ≤ 1, this theorem recovers the result that L_1(Π) is bounded below by the Shannon entropy H(Π). It presents the exact formula for the gap in this lower bound. It also follows from the above result that a tree attains the entropy bound iff the reduction factors are equal to 0.5 at each internal node in the tree. Using this result, minimizing L_1(Π) can be formulated as the following optimization problem:

min_{T∈T(B,Π)}  H(Π) + Σ_{a∈I} π_{Θ_a} [1 − H(ρ_a)].    (2)

Since Π is fixed, this optimization problem reduces to minimizing Σ_{a∈I} π_{Θ_a}[1 − H(ρ_a)] over T(B, Π). As mentioned earlier, finding a global optimal solution for this optimization problem is NP-complete. Instead, we may take a top-down approach and minimize the objective function by minimizing the term C_a := π_{Θ_a}[1 − H(ρ_a)] at each internal node, starting from the root node. Note that the only term that depends on the query chosen at node 'a' in this cost function is ρ_a. Hence the algorithm reduces to minimizing ρ_a (i.e., choosing a split as balanced as possible) at each internal node a ∈ I.

In other words, greedy minimization of (2) is equivalent to GBS. In the next section, we show how this framework can be extended to derive greedy algorithms for the problems of group identification and object identification with exponential costs.
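To make the greedy rule concrete, the following is a minimal sketch of the GBS query selection (the data layout and names are ours): at each internal node, scan the unused queries and pick the one whose split of the remaining probability mass is most balanced, i.e. whose reduction factor ρ_a is smallest.

    import numpy as np

    def gbs_query(B, prior, active, asked):
        # B: M x N binary response matrix; prior: length-M vector Pi;
        # active: integer array of objects reaching this node;
        # asked: set of queries already used on the path to this node.
        mass = prior[active].sum()
        best_q, best_rho = None, np.inf
        for q in range(B.shape[1]):
            if q in asked:
                continue
            mass_1 = prior[active[B[active, q] == 1]].sum()
            rho = max(mass_1, mass - mass_1) / mass  # reduction factor
            if rho < best_rho:
                best_q, best_rho = q, rho
        return best_q

Recursing on the two response sets until every node holds a single object yields the GBS tree, and the identity (1) can then be verified numerically by comparing Σ_i π_i d_i with H(Π) + Σ_{a∈I} π_{Θ_a}[1 − H(ρ_a)].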
3 Extensions of GBS

3.1 Group Identification
In group identification¹, the goal is not to determine the unknown object θ ∈ Θ, rather the group to which it belongs, in as few queries as possible. Here, in addition to B and Π, the group labels for the objects are also provided, where the groups are assumed to be disjoint.

We denote a group identification problem by (B, Π, y), where y = (y_1, …, y_M) denotes the group labels of the objects, y_i ∈ {1, …, K}. Let {Θ^k}_{k=1}^K be the partition of Θ, where Θ^k = {θ_i ∈ Θ : y_i = k}. It is important to note here that the group identification problem cannot be simply reduced to an object identification problem with groups {Θ^1, …, Θ^K} as "meta objects," since the objects within a group need not respond the same to each query. For instance, consider the toy example shown in Figure 1 where the objects θ_1, θ_2 and θ_3 belonging to group 1 cannot be collapsed into a single meta object, as these objects respond differently to queries q_1 and q_3.

In this context, we also note that GBS can fail to produce a good solution for a group identification problem as it does not take the group labels into consideration while choosing queries. Once again, consider the toy example shown in Figure 1 where query q_2 is sufficient to identify the group of an unknown object, whereas GBS requires 2 queries to identify the group when the unknown object is either θ_2 or θ_4. Here, we propose a natural extension of GBS to the problem of group identification.

¹ Golovin et al. [23] simultaneously studied the problem of group identification in the context of object identification with persistent noise. Their algorithm is an extension of that in [24].
Figure 1: Toy Example. Binary responses of the objects to the queries, their group labels, and the prior:

          q_1  q_2  q_3  Group label, y   Π
    θ_1   0    1    1    1                0.25
    θ_2   1    1    0    1                0.25
    θ_3   0    1    0    1                0.25
    θ_4   1    0    0    2                0.25

Figure 2: Decision tree constructed using GBS on the toy example. The root queries q_1; the branch containing θ_1 and θ_3 terminates immediately in a leaf with y = 1, while the branch containing θ_2 and θ_4 requires a second query q_2, leading to leaves with y = 1 (θ_2) and y = 2 (θ_4).
Note that when constructing a tree for group identification, a greedy, top-down algorithm terminates splitting when all the objects at the node belong to the same group. Hence, a tree constructed in this fashion can have multiple objects ending in the same leaf node and multiple leaves ending in the same group. For a tree with L leaves, we denote by L_k ⊆ L = {1, …, L} the set of leaves that terminate in group k. Similar to Θ^k ⊆ Θ, we denote by Θ^k_a ⊆ Θ_a the set of objects belonging to group k that reach node 'a' in a tree. Also, in addition to the reduction factor defined in Section 2, we define a new parameter called the group reduction factor for each group k ∈ {1, …, K} at each internal node.

Definition 2 (Group reduction factor). Let T be a decision tree constructed on a group identification problem (B, Π, y). The group reduction factor for any group k at an internal node 'a' is defined by ρ^k_a = max{π_{Θ^k_{l(a)}}, π_{Θ^k_{r(a)}}}/π_{Θ^k_a}.

Given (B, Π, y), let T(B, Π, y) denote the set of decision trees that can uniquely identify the groups of all objects in the set Θ. For any decision tree T ∈ T(B, Π, y), let d_j denote the depth of leaf node j ∈ L. Let random variable X denote the number of queries required to identify the group of an unknown object θ. Then, the expected number of queries required to identify the group of an unknown object using the given tree is equal to
L_1(Π) = Σ_{k=1}^K Pr(θ ∈ Θ^k) E[X | θ ∈ Θ^k] = Σ_{k=1}^K π_{Θ^k} ( Σ_{j∈L_k} (π_{Θ_j}/π_{Θ^k}) d_j ).    (3)

Theorem 2. For any T ∈ T(B, Π, y), the expected number of queries required to identify the group of an unknown object is given by

L_1(Π) = H(Π_y) + Σ_{a∈I} π_{Θ_a} [ 1 − H(ρ_a) + Σ_{k=1}^K (π_{Θ^k_a}/π_{Θ_a}) H(ρ^k_a) ]    (4)

where Π_y = (π_{Θ^1}, …, π_{Θ^K}) denotes the probability distribution of the object groups induced by the labels y and H(·) denotes the Shannon entropy.
Note that the term in the summation in (4) is non-negative. Hence, the above result implies that L_1(Π) is bounded below by the Shannon entropy of the probability distribution of the groups. It also follows from this result that this lower bound is achieved iff the reduction factor ρ_a is equal to 0.5 and the group reduction factors {ρ^k_a}_{k=1}^K are equal to 1 at every internal node in the tree. Also, note that the result in Theorem 1 is a special case of this result where each group is of size 1, leading to ρ^k_a = 1 for all groups at every internal node.

Using this result, the problem of finding a decision tree with minimum L_1(Π) can be formulated as:

min_{T∈T(B,Π,y)} Σ_{a∈I} π_{Θ_a} [ 1 − H(ρ_a) + Σ_{k=1}^K (π_{Θ^k_a}/π_{Θ_a}) H(ρ^k_a) ].    (5)

This optimization problem, being a generalized version of that in (2), is NP-complete. Hence, we may take a top-down approach and minimize the objective function greedily by minimizing the term π_{Θ_a}[1 − H(ρ_a) + Σ_{k=1}^K (π_{Θ^k_a}/π_{Θ_a}) H(ρ^k_a)] at each internal node, starting from the root node. Note that the terms that depend on the query chosen at node 'a' are ρ_a and ρ^k_a. Hence the algorithm reduces to minimizing C_a := 1 − H(ρ_a) + Σ_{k=1}^K (π_{Θ^k_a}/π_{Θ_a}) H(ρ^k_a) at each internal node 'a'.
Group-GBS (GGBS):
    Initialize: L = {root node}, Q_root = ∅
    while some a ∈ L has more than one group:
        Choose query q* = arg min_{q∈Q\Q_a} C_a(q)
        Form child nodes l(a), r(a)
        Replace 'a' with l(a), r(a) in L
    end
    with C_a = 1 − H(ρ_a) + Σ_{k=1}^K (π_{Θ^k_a}/π_{Θ_a}) H(ρ^k_a)

λ-GBS:
    Initialize: L = {root node}, Q_root = ∅
    while some a ∈ L has more than one object:
        Choose query q* = arg min_{q∈Q\Q_a} C_a(q)
        Form child nodes l(a), r(a)
        Replace 'a' with l(a), r(a) in L
    end
    with C_a = (π_{Θ_{l(a)}}/π_{Θ_a}) D_α(Θ_{l(a)}) + (π_{Θ_{r(a)}}/π_{Θ_a}) D_α(Θ_{r(a)})

Figure 3: Greedy algorithm for group identification.  Figure 4: Greedy algorithm for object identification with exponential costs.
Note that this objective function consists of two terms; the first term [1 − H(ρ_a)] favors queries that evenly distribute the probability mass of the objects at node 'a' to its child nodes (regardless of the group), while the term Σ_k (π_{Θ^k_a}/π_{Θ_a}) H(ρ^k_a) favors queries that transfer an entire group of objects to one of its child nodes. This algorithm, which we refer to as Group Generalized Binary Search (GGBS), is summarized in Figure 3. Finally, as an interesting connection with greedy decision-tree algorithms for multi-class classification, it can be shown that GGBS is equivalent to the decision-tree splitting algorithm used in the C4.5 software package, based on the entropy impurity measure [25].
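For concreteness, the GGBS node cost C_a can be computed as in the following sketch (our own code, with groups encoded as integer labels and the convention 0 log 0 = 0); GGBS then selects the query minimizing this cost at each internal node.

    import numpy as np

    def binary_entropy(r):
        # H(r) with the convention 0 log 0 = 0.
        if r <= 0.0 or r >= 1.0:
            return 0.0
        return -r * np.log2(r) - (1 - r) * np.log2(1 - r)

    def ggbs_cost(prior, groups, responses):
        # prior, groups, responses: arrays over the objects reaching node 'a';
        # responses are their 0/1 answers to a candidate query.
        mass = prior.sum()
        mass_r = prior[responses == 1].sum()
        rho = max(mass_r, mass - mass_r) / mass          # reduction factor
        cost = 1.0 - binary_entropy(rho)
        for k in np.unique(groups):
            in_k = groups == k
            mass_k = prior[in_k].sum()
            mass_kr = prior[in_k & (responses == 1)].sum()
            rho_k = max(mass_kr, mass_k - mass_kr) / mass_k  # group reduction factor
            cost += (mass_k / mass) * binary_entropy(rho_k)
        return cost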
3.2 Exponential Costs
Now assume the cost of identifying an object is defined by L_λ(Π) := log_λ ( Σ_i π_i λ^{d_i} ), where λ > 1 and d_i corresponds to the depth of object θ_i in a tree. In the limiting cases where λ tends to 1 and ∞, this cost function reduces to the average depth and worst case depth, respectively. That is,

L_1(Π) = lim_{λ→1} L_λ(Π) = Σ_{i=1}^M π_i d_i,    L_∞(Π) := lim_{λ→∞} L_λ(Π) = max_{i∈{1,…,M}} d_i.

As mentioned in Section 2, GBS is tailored to minimize L_1(Π), and hence may not produce a good suboptimal solution for the exponential cost function with λ > 1. Thus, we derive an extension of GBS for the problem of exponential costs. Here, we use a result by Campbell [26] which states that the exponential cost L_λ(Π) of any tree T is bounded below by the α-Rényi entropy, given by H_α(Π) := (1/(1−α)) log₂ ( Σ_i π_i^α ), where α = 1/(1 + log₂ λ). We consider a general object identification problem and derive an explicit formula for the gap in this lower bound. We then use this formula to derive a family of greedy algorithms that minimize the exponential cost function L_λ(Π) for λ > 1. Note that the entropy bound reduces to the Shannon entropy H(Π) and log₂ M in the limiting cases where λ tends to 1 and ∞, respectively.
Theorem 3. For any λ > 1 and any T ∈ T(B, Π), the exponential cost L_λ(Π) is given by

λ^{L_λ(Π)} = λ^{H_α(Π)} + Σ_{a∈I} π_{Θ_a} [ (λ − 1) λ^{d_a} − D_α(Θ_a) + (π_{Θ_{l(a)}}/π_{Θ_a}) D_α(Θ_{l(a)}) + (π_{Θ_{r(a)}}/π_{Θ_a}) D_α(Θ_{r(a)}) ]

where d_a denotes the depth of any internal node 'a' in the tree, Θ_a denotes the set of objects that reach node 'a', π_{Θ_a} = Σ_{i:θ_i∈Θ_a} π_i, α = 1/(1 + log₂ λ) and D_α(Θ_a) := [ Σ_{i:θ_i∈Θ_a} (π_i/π_{Θ_a})^α ]^{1/α}.
The term in the summation over internal nodes I in the above result corresponds to the gap in Campbell's lower bound. This result suggests a top-down greedy approach to minimize L_λ(Π), which is to minimize the term (λ − 1)λ^{d_a} − D_α(Θ_a) + (π_{Θ_{l(a)}}/π_{Θ_a}) D_α(Θ_{l(a)}) + (π_{Θ_{r(a)}}/π_{Θ_a}) D_α(Θ_{r(a)}) at each internal node, starting from the root node. Noting that the terms that depend on the query chosen at node 'a' are π_{Θ_{l(a)}}, π_{Θ_{r(a)}}, D_α(Θ_{l(a)}) and D_α(Θ_{r(a)}), this reduces to minimizing C_a := (π_{Θ_{l(a)}}/π_{Θ_a}) D_α(Θ_{l(a)}) + (π_{Θ_{r(a)}}/π_{Θ_a}) D_α(Θ_{r(a)}) at each internal node. This algorithm, which we refer to as λ-GBS, can be summarized as shown in Figure 4. Also, it can be shown by the application of L'Hôpital's rule that in the limiting case where λ → 1, λ-GBS reduces to GBS, and in the case where λ → ∞, λ-GBS reduces to GBS with uniform prior π_i = 1/M. The latter algorithm is GBS but with the true prior Π replaced by a uniform distribution.
Figure 5: Beta(1, β) density over the range [0.5, 1] for different values of β (uniform for β = 1, right skewed for β < 1, left skewed for β > 1).

Figure 6: Expected number of queries required to identify the group of an object using GBS and GGBS, together with the entropy bound, as a function of the parameters β_w and β_b.
3.3 Group Identification with Exponential Costs
Finally, we complete our discussion by considering the problem of group identification with exponential costs. Here, the cost of identifying the group of an object given a tree T ∈ T(B, Π, y) is defined to be L_λ(Π) = log_λ ( Σ_{j∈L} π_{Θ_j} λ^{d_j} ), which reduces to (3) in the limiting case as λ → 1, and to max_{j∈L} d_j, i.e., the worst case depth of the tree, in the case where λ → ∞.

Theorem 4. For any λ > 1 and any T ∈ T(B, Π, y), the exponential cost L_λ(Π) of identifying the group of an object is given by

λ^{L_λ(Π)} = λ^{H_α(Π_y)} + Σ_{a∈I} π_{Θ_a} [ (λ − 1) λ^{d_a} − D_α(Π_a) + (π_{Θ_{l(a)}}/π_{Θ_a}) D_α(Π_{l(a)}) + (π_{Θ_{r(a)}}/π_{Θ_a}) D_α(Π_{r(a)}) ]

where Π_y = (π_{Θ^1}, …, π_{Θ^K}) denotes the probability distribution of the object groups induced by the labels y, and D_α(Π_a) := [ Σ_{k=1}^K (π_{Θ^k_a}/π_{Θ_a})^α ]^{1/α} with α = 1/(1 + log₂ λ).

Note that the definition of D_α(Π_a) in this theorem is a generalization of that in Theorem 3. As mentioned earlier, Theorems 1-3 are special cases of the above theorem, where Theorem 2 follows as λ → 1 and Theorem 1 follows when, in addition, each group is of size one. This result also implies a top-down, greedy algorithm to minimize L_λ(Π), which is to choose a query that minimizes C_a := (π_{Θ_{l(a)}}/π_{Θ_a}) D_α(Π_{l(a)}) + (π_{Θ_{r(a)}}/π_{Θ_a}) D_α(Π_{r(a)}) at each internal node. Once again, it can be shown by the application of L'Hôpital's rule that in the limiting case where λ → 1, this reduces to GGBS, and in the case where λ → ∞, this reduces to choosing a query that minimizes the maximum number of groups in the child nodes [27].
4 Performance of the Greedy Algorithms
We compare the performance of the proposed algorithms to that of GBS on synthetic data generated
using different random data models.
4.1 Group Identification
For fixed M = |Θ| and N = |Q|, we consider a random data model where each query q ∈ Q is associated with a pair of parameters (γ_w(q), γ_b(q)) ∈ [0.5, 1]². Here, γ_w(q) reflects the correlation of the object responses within a group, and γ_b(q) captures the correlation of object responses between groups. When γ_w(q) is close to 0.5, each object within a group is equally likely to exhibit 0 or 1 as its response to query q, whereas, when it is close to 1, most of the objects within a group are highly likely to exhibit the same query response. Similarly, when γ_b(q) is close to 0.5, each group is equally likely to exhibit 0 or 1 as its response to the query, where a group response corresponds to the majority vote of the object responses within a group, while, as γ_b(q) tends to 1, most of the
Figure 7: Exponential cost incurred in identifying an object using GBS and λ-GBS. [The two panels, ζ = 1 and ζ = 2, plot the average exponential cost L_λ(Π) against λ for GBS, GBS with uniform prior, and λ-GBS, together with the entropy bound.]
groups are highly likely to exhibit the same response. Given these correlation values (γ_w(q), γ_b(q)) for a query q, the object responses to query q (i.e., the binary column of 0's and 1's corresponding to query q in B) are generated as follows:

1. Flip a fair coin to generate a Bernoulli random variable, x.
2. For each group k ∈ {1, …, K}, assign a binary label b_k, where b_k = x with probability γ_b(q).
3. For each object in group k, assign b_k as the object response to q with probability γ_w(q).

Given the correlation parameters (γ_w(q), γ_b(q)), ∀q ∈ Q, a random dataset can be created by following the above procedure for each query.
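A minimal sketch of this generative model is given below (our own code; we take the complementary events in steps 2 and 3 to flip the bit, which the description above leaves implicit).

    import numpy as np

    def random_dataset(groups, gamma_w, gamma_b, seed=0):
        # groups: length-M array of labels in {0, ..., K-1};
        # gamma_w, gamma_b: length-N arrays of correlation parameters in [0.5, 1].
        rng = np.random.default_rng(seed)
        M, N, K = groups.size, gamma_w.size, groups.max() + 1
        B = np.zeros((M, N), dtype=int)
        for q in range(N):
            x = rng.integers(2)                                 # step 1: fair coin
            b = np.where(rng.random(K) < gamma_b[q], x, 1 - x)  # step 2: group labels
            B[:, q] = np.where(rng.random(M) < gamma_w[q],      # step 3: object responses
                               b[groups], 1 - b[groups])
        return B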
We compare the performances of GBS and GGBS on random datasets generated using the above model. We demonstrate the results on datasets of size N = 200 (number of queries) and M = 400 (number of objects), where we randomly partitioned the objects into 15 groups and assumed a uniform prior on the objects. For each dataset, the correlation parameters are drawn from independent beta distributions over the range [0.5, 1], i.e., γ_w(q) ∼ Beta(1, β_w) and γ_b(q) ∼ Beta(1, β_b), where β_w, β_b ∈ {0.5, 0.75, 0.95, 1, 2, 4, 8}. Figure 5 shows the density function (pdf) of Beta(1, β) for different values of β. Note that β = 1 corresponds to a uniform distribution, while for β < 1 the distribution is right skewed and for β > 1 the distribution is left skewed.

Figure 6 compares the mean value of the cost function L_1(Π) for GBS and GGBS over 100 randomly generated datasets, for each value of (β_w, β_b). This shows the improved performance of GGBS over GBS in group identification. In particular, note that GGBS achieves performance close to the entropy bound as β_w decreases. This is due to the increased number of queries with γ_w(q) close to 1 in the dataset. As the correlation parameter γ_w(q) tends to 1, choosing that query keeps the groups intact, i.e., the group reduction factors ρ^k_a tend to 1 for these queries. Such queries offer significant gains in group identification, but can be overlooked by GBS.
4.2 Object Identification with Exponential Costs
We consider the same random data model as above where we set K = M, i.e., each group is comprised of one object. Thus, the only correlation parameter that determines the structure of the dataset is γ_b(q), q ∈ Q. Figure 7 demonstrates the improved performance of λ-GBS over standard GBS, and GBS with uniform prior, over a range of λ values, for a dataset generated using the above random data model with γ_b(q) ∼ Beta(1, 1) = unif[0.5, 1]. Each curve in the figure corresponds to the average value of the cost function L_λ(Π) as a function of λ over 100 repetitions. In each repetition, the prior is generated according to Zipf's law, i.e., (j^{−ζ} / Σ_{i=1}^M i^{−ζ})_{j=1}^M, ζ ≥ 0, after randomly permuting the objects. Note that in the special case when ζ = 0, this reduces to the uniform distribution, and as ζ increases, it tends to a skewed distribution with most of the probability mass concentrated on few objects.

Similar experiments have been performed on datasets generated using γ_b(q) ∼ Beta(a, b) for different values of a, b. In all our experiments, we observed λ-GBS to be consistently performing better than both the standard GBS and GBS with uniform prior. In addition, the performance of λ-GBS has been observed to be very close to that of the entropy bound. Finally, Figure 7 also reflects that λ-GBS converges to GBS as λ → 1, and to GBS with uniform prior as λ → ∞.
5
Conclusions
In this paper, we show that generalized binary search (GBS) is a top-down algorithm that greedily
minimizes the expected number of queries required to identify an object. We then use this interpretation to extend GBS in two ways. First, we consider the case where the objects are partitioned
into groups, and the goal is to identify only the group of the unknown object. Second, we consider
the problem where the cost of identifying an object grows exponentially in the number of queries.
The algorithms are derived in a common framework. In particular, we prove the exact formulas for
the cost function in each case that close the gap between previously known lower bounds related to
Shannon and R?enyi entropy. These exact formulas are then optimized in a greedy, top-down manner
to construct a decision tree. We demonstrate the improved performance of the proposed algorithms
over GBS through simulations. An important open question and the direction of our future work is
to relate these greedy algorithms to the global optimizer of their respective cost functions.
Acknowledgements
G. Bellala and C. Scott were supported in part by NSF Awards No. 0830490 and 0953135. S.
Bhavnani was supported in part by CDC/NIOSH grant No. R21OH009441.
6 Appendix: Proof Sketch for Theorem 4
Define two new functions L̃_λ and H̃_α as

L̃_λ := (1/(λ−1)) Σ_{j∈L} π_{Θ_j} (λ^{d_j} − 1) = Σ_{j∈L} π_{Θ_j} Σ_{h=0}^{d_j−1} λ^h   and   H̃_α := (1/(λ−1)) [ ( Σ_{k=1}^K π_{Θ^k}^α )^{1/α} − 1 ],

where L̃_λ is related to the cost function L_λ(Π) as λ^{L_λ(Π)} = (λ − 1) L̃_λ + 1, and H̃_α is related to the α-Rényi entropy H_α(Π_y) as

H_α(Π_y) = (1/(1−α)) log₂ Σ_{k=1}^K π_{Θ^k}^α = (1/(α log₂ λ)) log₂ Σ_{k=1}^K π_{Θ^k}^α = log_λ ( Σ_{k=1}^K π_{Θ^k}^α )^{1/α}    (6a)

⟹ λ^{H_α(Π_y)} = ( Σ_{k=1}^K π_{Θ^k}^α )^{1/α} = (λ − 1) H̃_α + 1,    (6b)

where we use the definition of α, i.e., α = 1/(1 + log₂ λ), in (6a). Now, we note from Lemma 1 that

L̃_λ = Σ_{a∈I} λ^{d_a} π_{Θ_a}  ⟹  λ^{L_λ(Π)} = 1 + Σ_{a∈I} (λ − 1) λ^{d_a} π_{Θ_a},    (7)

where d_a denotes the depth of internal node 'a' in the tree T. Similarly, we note from (6b) and Lemma 2 that

λ^{H_α(Π_y)} = 1 + Σ_{a∈I} [ π_{Θ_a} D_α(Π_a) − π_{Θ_{l(a)}} D_α(Π_{l(a)}) − π_{Θ_{r(a)}} D_α(Π_{r(a)}) ].    (8)

Finally, the result follows from (7) and (8) above.

Lemma 1. The function L̃_λ can be decomposed over the internal nodes in a tree T, as L̃_λ = Σ_{a∈I} λ^{d_a} π_{Θ_a}, where d_a denotes the depth of internal node a ∈ I and π_{Θ_a} is the probability mass of the objects at that node.

Lemma 2. The function H̃_α can be decomposed over the internal nodes in a tree T, as

H̃_α = (1/(λ−1)) Σ_{a∈I} [ π_{Θ_a} D_α(Π_a) − π_{Θ_{l(a)}} D_α(Π_{l(a)}) − π_{Θ_{r(a)}} D_α(Π_{r(a)}) ],

where D_α(Π_a) := [ Σ_{k=1}^K (π_{Θ^k_a}/π_{Θ_a})^α ]^{1/α} and π_{Θ_a} denotes the probability mass of the objects at any internal node a ∈ I.

The above two lemmas can be proved using induction over subtrees rooted at any internal node 'a' in the tree. The details may be found in the Supplemental Material.
References
[1] S. Dasgupta, "Analysis of a greedy active learning strategy," Advances in Neural Information Processing Systems, 2004.
[2] R. Nowak, "Generalized binary search," Proceedings of the 46th Allerton Conference on Communications, Control and Computing, pp. 568-574, 2008.
[3] R. Nowak, "Noisy generalized binary search," Advances in Neural Information Processing Systems, vol. 22, pp. 1366-1374, 2009.
[4] D. Golovin and A. Krause, "Adaptive submodularity: A new approach to active learning and stochastic optimization," in Proceedings of the International Conference on Learning Theory (COLT), 2010.
[5] D. W. Loveland, "Performance bounds for binary testing with arbitrary weights," Acta Informatica, 1985.
[6] F. Yu, F. Tu, H. Tu, and K. Pattipati, "Multiple disease (fault) diagnosis with applications to the QMR-DT problem," Proceedings of IEEE International Conference on Systems, Man and Cybernetics, vol. 2, pp. 1187-1192, October 2003.
[7] J. Shiozaki, H. Matsuyama, E. O'Shima, and M. Iri, "An improved algorithm for diagnosis of system failures in the chemical process," Computational Chemical Engineering, vol. 9, no. 3, pp. 285-293, 1985.
[8] S. Bhavnani, A. Abraham, C. Demeniuk, M. Gebrekristos, A. Gong, S. Nainwal, G. Vallabha, and R. Richardson, "Network analysis of toxic chemicals and symptoms: Implications for designing first-responder systems," Proceedings of American Medical Informatics Association, 2007.
[9] D. Geman and B. Jedynak, "An active testing model for tracking roads in satellite images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 1, pp. 1-14, 1996.
[10] M. J. Swain and M. A. Stricker, "Promising directions in active vision," International Journal of Computer Vision, vol. 11, no. 2, pp. 109-126, 1993.
[11] A. Gupta, R. Krishnaswamy, V. Nagarajan, and R. Ravi, "Approximation algorithms for optimal decision trees and adaptive TSP problems," 2010, available online at arXiv.org:1003.0722.
[12] M. Garey, "Optimal binary identification procedures," SIAM Journal on Applied Mathematics, vol. 23(2), pp. 173-186, 1972.
[13] L. Hyafil and R. Rivest, "Constructing optimal binary decision trees is NP-complete," Information Processing Letters, vol. 5(1), pp. 15-17, 1976.
[14] S. R. Kosaraju, T. M. Przytycka, and R. S. Borgstrom, "On an optimal split tree problem," Proceedings of the 6th International Workshop on Algorithms and Data Structures, WADS, pp. 11-14, 1999.
[15] R. M. Goodman and P. Smyth, "Decision tree design from a communication theory standpoint," IEEE Transactions on Information Theory, vol. 34, no. 5, 1988.
[16] P. A. Humblet, "Generalization of Huffman coding to minimize the probability of buffer overflow," IEEE Transactions on Information Theory, vol. IT-27, no. 2, pp. 230-232, March 1981.
[17] F. Schulz, "Trees with exponentially growing costs," Information and Computation, vol. 206, 2008.
[18] M. B. Baer, "Rényi to Rényi - source coding under siege," Proceedings of IEEE International Symposium on Information Theory, pp. 1258-1262, July 2006.
[19] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley, 1991.
[20] D. A. Huffman, "A method for the construction of minimum-redundancy codes," Proceedings of the Institute of Radio Engineers, 1952.
[21] C. E. Shannon, "A mathematical theory of communication," Bell Systems Technical Journal, vol. 27, pp. 379-423, July 1948.
[22] R. M. Fano, Transmission of Information. MIT Press, 1961.
[23] D. Golovin, D. Ray, and A. Krause, "Near-optimal Bayesian active learning with noisy observations," to appear in the Proceedings of the Neural Information Processing Systems (NIPS), 2010.
[24] S. Dasgupta, "Coarse sample complexity bounds for active learning," Advances in Neural Information Processing Systems, 2006.
[25] G. Bellala, S. Bhavnani, and C. Scott, "Group-based query learning for rapid diagnosis in time-critical situations," Tech. Rep., 2009, available online at arXiv.org:0911.4511.
[26] L. L. Campbell, "A coding problem and Rényi's entropy," Information and Control, vol. 8, no. 4, pp. 423-429, August 1965.
[27] G. Bellala, S. Bhavnani, and C. Scott, "Query learning with exponential query costs," Tech. Rep., 2010, available online at arXiv.org:1002.4019.
9
| 4103 |@word version:1 proportion:1 unif:1 open:1 simulation:1 q1:4 reduction:12 prefix:2 ka:14 com:1 gmail:1 john:1 partition:1 greedy:20 leaf:10 intelligence:1 provides:1 coarse:1 node:57 allerton:1 org:3 mathematical:1 along:1 constructed:5 beta:7 przytycka:1 symposium:1 persistent:1 consists:1 prove:1 ray:1 manner:1 expected:16 rapid:1 growing:1 multi:1 decomposed:2 considering:1 begin:1 provided:1 bounded:3 notation:1 xx:1 mass:6 rivest:1 minimizes:6 q2:3 supplemental:2 finding:2 every:2 classifier:1 demonstrates:1 control:2 medical:2 intervention:1 grant:1 appear:1 engineering:1 tends:5 limit:1 qmr:1 path:1 acta:1 studied:2 suggests:1 range:3 jedynak:1 testing:3 procedure:2 suresh:1 bell:1 word:1 road:1 cannot:3 close:7 hyafil:1 wad:1 context:4 collapsed:1 equivalent:3 attention:1 starting:3 minq:2 regardless:1 iri:1 identifying:10 splitting:3 rule:2 analogous:1 limiting:5 construction:1 exact:5 smyth:1 designing:1 element:1 identifi:1 geman:1 observed:2 capture:1 worst:2 decrease:1 disease:4 mentioned:3 balanced:1 complexity:1 terminating:1 depend:2 impurity:1 differently:1 various:1 tx:2 ff1:1 enyi:7 distinct:2 query:62 choosing:4 whose:1 posed:2 otherwise:2 favor:2 richardson:1 itself:1 noisy:2 tsp:1 online:3 advantage:1 propose:2 tu:2 rapidly:1 iff:2 alphabetic:1 transmission:1 satellite:1 produce:2 converges:1 object:101 derive:5 develop:1 gong:1 school:1 implemented:1 implies:2 direction:2 submodularity:1 stochastic:1 viewing:1 material:2 assign:2 nagarajan:1 generalization:2 summation:2 extension:6 pl:1 achieves:1 optimizer:1 label:8 radio:1 agrees:1 repetition:2 reflects:2 minimization:1 mit:1 rather:2 reaching:1 q3:2 derived:1 consistently:1 bernoulli:1 tech:2 greedily:6 attains:1 entire:1 schulz:1 selects:1 i1:2 sketched:1 translational:1 arg:2 classification:1 colt:1 constrained:1 special:5 initialize:2 equal:5 construct:3 once:3 yu:1 future:1 np:4 few:3 randomly:3 simultaneously:1 individual:1 maxj:1 replaced:1 highly:2 generically:1 permuting:1 subtrees:1 implication:1 nowak:1 necessary:2 respective:1 tree:50 indexed:2 divide:1 instance:1 column:2 earlier:3 increased:1 cover:1 cost:35 subset:1 uniform:9 comprised:1 swain:1 shima:1 eec:1 synthetic:1 density:1 international:5 siam:1 informatics:2 ym:1 again:3 clayscot:1 choose:3 american:1 leading:1 return:1 toy:3 distribute:1 coding:11 summarized:2 explicitly:1 depends:2 performed:2 root:6 minimize:8 correspond:2 identify:22 yes:1 identification:36 bayesian:1 cybernetics:1 cation:1 reach:4 definition:4 failure:1 pp:13 garey:1 dm:1 associated:2 mi:1 di:5 proof:3 recovers:1 gain:1 dataset:5 proved:1 knowledge:1 lim:3 actually:1 campbell:3 dt:1 response:11 improved:5 symptom:1 biomedical:1 correlation:7 traveling:1 sketch:1 grows:3 true:2 hence:7 chemical:4 skewed:3 uniquely:3 rooted:1 generalized:8 loveland:1 pdf:1 theoretic:3 complete:8 demonstrate:2 l1:11 meaning:1 image:1 consideration:1 common:1 preventative:1 exponentially:4 extend:4 interpretation:4 discussed:1 belong:1 association:1 refer:3 significant:1 zipf:1 pm:1 fano:2 hp:3 similarly:2 mathematics:1 dj:6 krishnaswamy:1 perspective:1 bhavnani:4 belongs:3 scenario:1 certain:2 buffer:1 meta:2 binary:23 opital:2 qroot:2 fault:3 rep:2 yi:2 kosaraju:1 minimum:2 houston:1 determine:1 july:2 branch:1 multiple:3 reduces:13 technical:1 determination:1 offer:1 equally:2 award:1 involving:1 vision:3 noiseless:1 arxiv:3 tailored:1 achieved:1 huffman:4 addition:4 whereas:2 krause:2 source:4 crucial:1 goodman:1 standpoint:1 induced:2 tend:1 near:1 noting:2 split:2 suboptimal:3 
texas:2 qj:1 gb:56 deep:1 useful:1 concentrated:1 informatica:1 reduced:1 generate:1 exist:1 nsf:1 disjoint:1 diagnosis:7 dasgupta:2 vol:12 group:83 redundancy:1 drawn:1 ravi:1 package:1 letter:1 respond:3 family:1 decision:21 appendix:2 bound:14 software:1 optimality:1 min:2 performing:1 department:1 according:1 combination:1 march:1 belonging:2 terminates:1 partitioned:4 toxic:2 restricted:1 pr:2 previously:1 fail:1 flip:1 end:3 umich:2 salesman:1 generalizes:1 available:3 encounter:1 coin:1 thomas:1 top:10 remaining:1 assumes:1 denotes:9 log2:10 medicine:1 especially:1 overflow:1 objective:5 question:3 strategy:1 dependence:1 exhibit:4 bellala:3 entity:1 majority:1 evenly:2 induction:1 code:4 length:2 minimizing:12 october:1 relate:1 negative:1 design:2 unknown:16 observation:1 datasets:4 gowtham:2 finite:1 situation:1 extended:1 communication:3 misspecification:1 y1:1 ww:1 arbitrary:1 august:1 community:1 overlooked:1 clayton:1 bk:3 pair:3 required:12 connection:1 optimized:1 c4:1 nip:1 qa:5 address:1 below:6 pattern:1 scott:3 max:3 critical:2 natural:2 quantification:1 lk:1 created:1 health:1 prior:11 literature:1 acknowledgement:1 law:1 cdc:1 interesting:1 incurred:1 sufficient:1 dd:1 row:2 borgstrom:1 supported:2 institute:2 curve:1 depth:11 valid:1 ending:2 qn:1 collection:1 adaptive:3 transaction:3 keep:1 global:2 active:11 assumed:2 search:8 promising:1 terminate:1 transfer:1 robust:1 ca:8 golovin:3 necessarily:2 constructing:4 da:8 pk:1 abraham:1 noise:1 alarm:1 child:6 fair:1 referred:1 fashion:1 wiley:1 explicit:1 exponential:19 stricker:1 bij:1 down:10 formula:7 theorem:15 gupta:1 exists:2 workshop:1 gap:4 entropy:20 michigan:1 simply:1 likely:4 tracking:1 corresponds:6 determines:1 viewed:1 goal:5 formulated:2 ann:1 replace:2 man:1 specifically:1 lemma:5 engineer:1 called:2 arbor:1 shannon:11 vote:1 intact:1 internal:25 latter:1 arises:2 dept:1 d1:1 avoiding:1 |
3,428 | 4,104 | Size Matters: Metric Visual Search Constraints from
Monocular Metadata
Mario Fritz
UC Berkeley EECS & ICSI
Kate Saenko
UC Berkeley EECS & ICSI
Trevor Darrell
UC Berkeley EECS & ICSI
Abstract
Metric constraints are known to be highly discriminative for many objects, but if training is limited to data captured from a particular 3-D sensor, the quantity of training data may be severely limited. In this paper, we show how a crucial aspect of 3-D information, object and feature absolute size, can be added to models learned from commonly available online imagery, without use of any 3-D sensing or reconstruction at training time. Such models can be utilized at test time together with explicit 3-D sensing to perform robust search. Our model uses a "2.1D" local feature, which combines traditional appearance gradient statistics with an estimate of average absolute depth within the local window. We show how category size information can be obtained from online images by exploiting relatively ubiquitous metadata fields specifying camera intrinsics. We develop an efficient metric branch-and-bound algorithm for our search task, imposing 3-D size constraints as part of an optimal search for a set of features which indicate the presence of a category. Experiments on test scenes captured with a traditional stereo rig are shown, exploiting training data from purely monocular sources with associated EXIF metadata.
1 Introduction
Two themes dominate recent progress towards situated visual object recognition. Most significantly,
the availability of large scale image databases and machine learning methods has driven performance: accuracy on many category detection tasks is a function of the quantity and quality of the
available training data. At the same time, when we consider situated recognition tasks, i.e., as
performed by robots, autonomous vehicles, and interactive physical devices (e.g., mobile phones),
it is apparent that the variety and number of sensors is often what determines performance levels:
e.g., the availability of 3-D sensing can significantly improve performance on specific practical tasks,
irrespective of the amount of training data. A rich variety of 3-D sensors are available on modern
robotic systems, yet the training data are few for most 3-D sensor regimes: the vast majority of
available online visual category data are from monocular sources and there are few databases of
real-world 3-D scans from which to train robust visual recognizers. In general it is, however, difficult to reconcile these two trends: while one would like to use all available sensors at test time,
the paucity of 3D training data will mean few categories are well-defined with full 3-D models, and
generalization performance to new categories which lack 3-D training data may be poor. In this
paper, we propose a method to bridge this gap and extract features from typical 2D data sources that
can enhance recognition performance when 3D information is available at test time.
Figure 1: Recovery of object size from known camera intrinsics
The paradigm of recognition-by-local-features has been well established in the computer vision
literature in recent years. Existing recognition schemes are designed generally to be invariant to
scale and size. Local shape descriptors based on 3-D sensing have been proposed (e.g., VIP [2]),
as well as local 3-D descriptors (e.g., 3-D shape context and SIFT [4, 3]), but we are somewhat
skeptical of the ability of even the most recent 3-D sensor systems to extract the detailed local
geometry required to reliably detect and describe local 3-D shapes on real world objects.
Instead of extracting full 3D local features, we propose a "2.1D" local feature model which augments a traditional 2D local feature (SIFT, GLOH, SURF, etc.) with an estimate of the depth and 3-D size of an observed patch. Such features could distinguish, for example, the two different keypad patterns on a mobile device keyboard vs. on a full-size computer keyboard; while the keys might look locally similar, the absolute patch size would be highly distinctive. We focus on the recognition of real-world objects when additional sensors are available at test time, and show how 2.1D information
can be extracted from monocular metadata already present in many online images. Our model
includes both a representation of the absolute size of local features, and of the overall dimension
of categories. We recover the depth and size of the local features, and thus of the bounding box of
a detected object in 3-D. Efficient search is an important goal, and we show a novel extension to
multi-class branch-and-bound search using explicit metric 3-D constraints.
2 Recognition with "2.1D" features
The crux of our method is the inference and exploitation of size information; we show that we
can obtain such measurements from non-traditional sources that do not presume a 3-D scanner at
training time, nor rely on multi-view reconstruction / structure-from-motion methods. We instead
exploit cues that are readily available in many monocular camera images.1 We are not interested
in reconstructing the object surface, and only estimate the absolute size of local patches, and the
statistics of the bounding box of instances in the category; from these quantities we can infer the
category size.
We adopt a local-feature based recognition model and augment it with metric size information.
While there are several possible local feature recognition schemes based on sets of such local features, we focus on the Naive Bayes nearest-neighbor model of [1] because of its simplicity and
good empirical results. We assume one or more common local feature descriptors (and associated
detectors or dense sampling grids): SIFT, SURF, GLOH, MSER. Our emphasis in this paper is on
¹ There are a number of general paradigms by which estimates of object size can be extracted from a 2D
image data source, e.g., regression from scene context [6]), or inference of depth-from-a-single-image [7, 11,
16]. In addition to such schemes, text associated with the training images extracted from internet merchants
(e.g., Amazon, eBay) typically explicitly defines a bounding volume for the object. While all these are of
interest, we consider here only the use of methods based implicitly on depth-from-focus (e.g., [8]), present
as camera intrinsics stored as metadata in the JPEG EXIF file format. Images collected by many modern
consumer-grade digital SLR cameras automatically store absolute distance-to-subject as metadata in the JPEG
image.
Figure 2: Illustration of metric object size derived from image metadata stored in EXIF fields on an
image downloaded from Flickr.com. Absolute size is estimated by projecting bounding box of local
features on object into 3-D using EXIF camera intrinsics stored in image file format.
improving the accuracy of recognizing categories that are at least approximately well modeled with such local feature schemes; size information alone cannot help recognize a category that does not repeatably and reliably produce such features.
2.1 Metric object size from monocular metadata
Absolute pixel size can be inferred using a planar object approximation and depth-from-focus cues. Today's digital cameras supplement the image data with rich metadata provided in the EXIF format.
EXIF stores a wide range of intrinsic camera parameters, which often include the focus distance as
an explicit parameter (in some cameras it is not provided directly, but can be estimated from other
provided parameters). This gives us a workable approximation of the depth of the object, assuming
it is in focus in the scene: with a pinhole camera model, we can derive the metric size of a pixel in the scene given these assumptions. Using simple trigonometry, the metric pixel size is μ = (s · d) / (f · r), where s is the sensor width, d is the focus distance, f is the focal length, and r is the horizontal resolution of the sensor.
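As a concrete illustration, the computation above takes only a few lines; the following is a minimal Python sketch, where the function and argument names are illustrative rather than part of any real EXIF library:

```python
# Minimal sketch of the metric pixel size computation; names are illustrative.

def metric_pixel_size(sensor_width_mm, focus_distance_mm,
                      focal_length_mm, horizontal_resolution_px):
    """Extent (in mm) that one pixel covers on an in-focus, fronto-parallel
    surface under the pinhole model: mu = (s * d) / (f * r)."""
    return (sensor_width_mm * focus_distance_mm) / \
           (focal_length_mm * horizontal_resolution_px)

# Example: 23.6 mm wide sensor, subject 1.5 m away, 50 mm lens, 4288 px wide.
mu = metric_pixel_size(23.6, 1500.0, 50.0, 4288)
# A patch spanning w pixels then covers roughly w * mu millimetres in the scene.
```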
As shown in Figure 2, this method provides a size estimate reference for the visual observation based
on images commonly available on the internet, e.g., Flickr.com. A bounding box can either be estimated from the feature locations, given an uncluttered background, or provided by manual labeling
or by an object discovery technique which clusters local features to discover the segmentation of the
training data.
2.2 Naive Bayes estimation of discriminative feature weights
Our object model is based on a bag-of-words model where an object is encoded by a set of visual features x_i ∈ X within the circumscribing bounding box. Our size-constrained learning scheme
is applicable to a range of recognition methods; for simplicity we adopt a simple but efficient nonparametric naive Bayes scheme. We denote object appearance with p(X|C); following [1], this
density can be captured and modeled using Parzen window density estimates:
Figure 3: Metric object size for ten different categories derived from camera metadata. Bold symbols
depict ground truth obtained by direct physical measurement of category instance.
p̂(x|C) = (1/N) Σ_{j=1}^{N} K(x − x_j^C),   (1)
where K(.) is a Gaussian kernel.
We extend this model in a discriminative fashion similar to [18]. We compute the detection score for
a given bounding box from the log-likelihood ratio computed based on the kernel density estimate
from above. Assuming independence of the features, the class-specific probabilities are factorized to
obtain a sum of individual feature contributions:
log [ p(X|C) / p(X|C̄) ] ≈ log [ ∏_i p(x_i|C) / ∏_i p(x_i|C̄) ]   (2)

= Σ_i ( log p(x_i|C) − log p(x_i|C̄) )   (3)
As shown in [1], an approximate density based only on the nearest neighbor is accurate for many
recognition tasks. This further simplifies the computation and approximates the class specific feature
probabilities by:
log p(x_i|C) ≈ −‖x_i − NN_C(x_i)‖²,   (4)
where NN_C(x_i) denotes the nearest example to data point x_i in the training data of class C, so that the norm gives the corresponding nearest-neighbor distance. In the multi-class case, each feature x_i is compared to the nearest neighbors in the training examples of each class, and NN_C̄(x_i) can be simply obtained as the minimum over all retrieved nearest neighbors except those in C.
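For concreteness, the scoring of Eqs. (2)-(4) can be sketched as follows; this is a brute-force Python illustration (a kd-tree or similar index would replace the linear scans in practice), and the array names are assumptions of the sketch:

```python
import numpy as np

def nn_sq_dist(x, train):
    """Squared distance from descriptor x to its nearest training descriptor."""
    return np.min(np.sum((train - x) ** 2, axis=1))

def detection_score(features, train_pos, train_neg):
    """Log-likelihood ratio of Eq. (3), with each log p(x|C) approximated by
    the negative squared nearest-neighbour distance of Eq. (4).
    train_pos / train_neg hold descriptors of class C and its complement."""
    return sum(nn_sq_dist(x, train_neg) - nn_sq_dist(x, train_pos)
               for x in features)
```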
3 Efficient search with absolute size
Recently, a class of algorithms for efficient detection based on local features has been proposed
[19, 20, 21]; these search for the highest-scoring bounding box given the observed features X and a scoring function f using an efficient branch-and-bound scheme. These methods can be formulated as an optimization b* = argmax_b f(b), where b = (x1, y1, x2, y2) is a bounding box. The core idea
is to structure the search space using a search tree. The top node contains the set of all possible
bounding boxes. The child nodes contain splits of the set of bounding boxes in the parent node. The
leaves contain single bounding boxes. If it is possible to derive lower and upper bounds for rectangle
sets at the nodes, a branch-and-bound technique can be applied to quickly prune a node if its upper bound is lower than the lower bound of a previously visited node.
Bounds can be easily computed for bag-of-words representations, which have been previously used
in this context for object detection. Each feature has a learned weight w_j, and the score function f reads:
f(b) = Σ_{j∈T(b)} w_j,   (5)

where T(b) is the set of all features contained in the bounding box b.
While previous approaches have derived the feature weight from SVM training, we propose to use
likelihood ratios which are derived in a non-parametric fashion.
We further extend this method to search for objects in 3D. Our bounding box hypotheses b = (x1, y1, z1, x2, y2, z2) are defined explicitly in 3D and indicate the actual spatial relation of objects
in the scene.
We employ a constraint factor S(b) in the objective that indicates whether a bounding box has a valid size for a particular class:
f(b) = Σ_{j∈T(b)} w_j · S(b)   (6)

where S(b) is a basic rectangle (indicator) function that takes the value 1 for valid bounding boxes and 0 otherwise.
Most importantly, bounds over bounding box sets can still be efficiently computed. As long as the
bounding box set at a given node in the search tree contains at least one bounding box of valid size,
the score is unaffected. When there is no valid rectangle left, the score evaluates to zero, and that node, as well as the associated subspace of the search problem, gets pruned.
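A sketch of this bound computation, assuming an interval representation of the box set in the spirit of ESS [24, 25]; the box-set helpers (max_box, min_box, contains) and the size feasibility test are hypothetical names:

```python
# Sketch of the upper bound used to prune the size-constrained search tree.

def upper_bound(box_set, weights, locations, size_is_feasible):
    # If no box in the set satisfies the metric size constraint S(b), the
    # whole subtree scores zero and can be pruned (Eq. 6).
    if not size_is_feasible(box_set):
        return 0.0
    largest, smallest = box_set.max_box(), box_set.min_box()
    bound = 0.0
    for w, loc in zip(weights, locations):
        if w > 0 and largest.contains(loc):
            bound += w          # optimistically include positive weights
        elif w < 0 and smallest.contains(loc):
            bound += w          # negative weights that no box in the set can avoid
    return bound
```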
At test time, it is anticipated that 3D observations are directly available via LIDAR scans or active
or passive stereo estimation. Given these measurements, we constrain the search to leverage the
metric information acquired at training time. The depth for each feature in the image at test time
allows us to infer their 3D location in the test scene. We can thus extend efficient multi-class branch-and-bound search to operate in metric 3D space under the constraints imposed by our knowledge of
metric patch size and metric object size.
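Concretely, a feature at pixel (u, v) with an observed metric depth can be back-projected with the standard pinhole model; the intrinsics names below are illustrative:

```python
# Back-project an image feature to 3D using test-time depth (pinhole model).
# fx, fy are focal lengths in pixels; (cx, cy) is the principal point.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) with metric depth to camera coordinates (X, Y, Z)."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return (X, Y, depth)
```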
We also make use of the multi-class branch-and-bound scheme proposed in [20]. We not only split bounding box sets along dimensions, but also split the set of object classes. This leads to a simultaneous search scheme for multiple classes.
4 Related Work
Many methods have been proposed to deal with the problem of establishing feature correspondence across varying image scales. Lowe et al. proposed to up/downsample an image at multiple scales and identify the characteristic scale for each image patch [9]. A histogram of edge orientations is computed for each patch scaled to its characteristic scale in order to obtain a scale-invariant visual descriptor. [10] identifies scale-invariant regions by iteratively expanding consistent regions with an increasing intensity threshold until they become "stable". The size of the stable region is the characteristic scale for the feature. With both methods, a feature in one image can be mapped to the same characteristic scale as a feature in another image. Since both features are mapped to the same scale, an "apple-to-apple" comparison can be performed. In contrast, our method does not require such a mapping. Instead, it determines the metric size of any image patch and uses it to compare two features directly.
There have been several works on estimating depth from single images. Some very early work
estimated depth from the degree of the defocus of edges [8]. [6] describes a method to infer scene
depth from structure based on global and local histograms of Gabor filter responses for indoor and
outdoor scenes. [11] describes a supervised Markov Random Field method to predict the depth from
local and global features for outdoor images. In our work, we focus on indoor office scenes with
finer granularity. Hardware-based methods for obtaining 3D information from monocular images
include modifying the structure of a conventional camera to enable it to capture 3D geometry. For
example, [12] introduces the coded aperture technique by inserting a patterned occluder within the
aperture of the camera lenses. Images captured by such a camera exhibit depth-dependent patterns
from which a layered depth map can be extracted.
Most methods based on visual feature quantization learn their codebooks using invariant features.
However, the scale of each code word is lost after each image patch is normalized to its invariant region. Thus, it is possible for two features to match because they happen to look similar, even though
in the physical world they actually have two different sizes. For example, an eye of a dinosaur may
be confused with an eye of a fish, because their size difference is lost once they are embedded into
the visual code book. There have been some proposals to deal with this problem. For example,
[13] records the relative position of the object center in the codebook, and at test time each codebook word votes for the possible object center at multiple scales. Moreover, [14] explicitly put the
orientation and scale of each feature in the codebook, so that object center location can be inferred
directly. However, these works treat orientation and scale as independent of the feature descriptor
and use them to post-verify whether a feature found to be consistent in terms of the appearance
descriptor would also be consistent in terms of scale. In contrast, our work directly embeds the scale
attribute into the visual descriptor. A visual word would be matched only if its size is right. In other
words, the visual appearance and the scale are matched simultaneously in our codebook.
Depth information has been used to improve the performance of various image processing tasks,
such as video retrieval, object instance detection, 3D scene recognition, and vehicle navigation.
For example, [15] used depth features for video retrieval, extracting depth from monocular video sequences by exploiting the motion parallax of the objects in the video. [16] developed an integrated probabilistic model for appearance and 3D geometry of object categories. However, their method does not explicitly assign physical size to each image patch and needs to provide scale-invariance by explicitly calculating the perspective projection of objects in different 3D poses. In contrast, our
method can infer the real-world sizes of features and can establish feature correspondences at their
true physical scale. [17] proposed a way to use depth estimation for real-time obstacle detection
from a monocular video stream in a vehicle navigation scenario. Their method estimates scene
depth from the scaling of supervised image regions and generates obstacle hypotheses from these
depth estimates.
5 Experiments
In the experiments we show how to improve performance of visual object classifiers by leveraging
richer sensor modalities deployed at test time. We analyze how the different proposed means of
putting visual recognition in metric context improves detection performance.
5.1 Data
For training we explore the camera-based metadata scheme described above, where we derive the
metric pixel size from EXIF data. We downloaded 38 images of 10 object categories taken with
a consumer grade dSLR that stores relevant EXIF fields (e.g., Nikon D90). For test data we have
collected 34 scenes in our laboratory of varying complexity containing 120 object instances in offices
and and a kitchen. Considerable levels of clutter, lighting and occlusion are present in the test set.
Stereo depth observations using a calibrated camera rig are obtained with test imagery, providing an
estimate of the 3-D depth of each feature point at test time.
Figure 4: Example detections.
object         baseline   2.1D
bike helmet    89.0       99.1
body wash      3.3        80.0
juice          76.0       100.0
kleenex        60.0       76.53
mug            0.0        24.63
pasta          36.3       65.6
phone          80.0       65.7
pringles       45.8       94.3
toothpaste     20.0       100.0
vitamins       0.0        60.0
average        41.0       76.59

Table 1: Average precision for several categories for baseline 2-D branch-and-bound search and our 2.1D method.
5.2 Evaluation
We start with a baseline, which uses the plain branch-and-bound detection scheme and 2D features. We then experiment with augmenting the representation to 2.1D, adding 3D location to the interest points, as well as employing the metric size constraint.
Table 1 shows the average precision for each category for baseline 2-D branch and bound search
and our 2.1D method. Adding the metric object constraints (second column) improves the results
significantly. As illustrated in Figure 4, our 2.1D representation allows grouping in 3-D and provides
improved occlusion handling. We see that the baseline branch-and-bound performs poorly on this
data set and is not capable of localizing two of the items at all. For the training data available
for these categories the local evidence was apparently not strong enough to support this detection
scheme, but with size constraints performance improved significantly.
6 Conclusion
Progress on large scale systems for visual categorization has been driven by the abundance of training data available from the web. Much richer and potentially more discriminative measurements can
be acquired and leveraged by additional sensor modalities, e.g. 3D measurements from stereo or
lidar, typically found on contemporary robotic platforms, but there is rarely sufficient training data
to learn robust models using these sensors. In order to reconcile these two trends, we developed a
method for appearance-based visual recognition in metric context, exploiting camera-based metadata to obtain size information regarding a category and local feature models that can be exploited
using 3-D sensors at test time.
We believe that "size matters", and that the most informative and robust aspect of 3-D information is dimensional. We augmented local feature-based visual models with a "2.1D" object representation by introducing the notion of a metric patch size. Scene context from 3-D sensing and category-level dimension estimates provide additional cues to limit search. We presented a fast, multi-class detection scheme based on a metric branch-and-bound formulation. While our method was demonstrated only on simple 2-D SURF features, we believe these methods will be applicable as well to multi-kernel schemes with additional feature modalities, as well as to object-level descriptors (e.g., HOG, LatentSVM).
Acknowledgements. This work was supported in part by TOYOTA and a Feodor Lynen Fellowship granted by the Alexander von Humboldt Foundation.
References
[1] O. Boiman, E. Shechtman, and M. Irani, In defense of Nearest-Neighbor based image classification, In Proceedings of Computer Vision and Pattern Recognition, 2008.
[2] C. Wu, B. Clipp, X. Li, J.-M. Frahm, and M. Pollefeys, 3D model matching with ViewpointInvariant Patches (VIP), In Proceedings of Computer Vision and Pattern Recognition, 2008.
[3] P. Scovanner, S. Ali, M. Shah, A 3-dimensional SIFT descriptor and its application to action
recognition, In Proceedings of the 15th international conference on Multimedia, 2007.
[4] M. Kortgen, G. J. Park, M. Novotni, R. Klein, 3D Shape Matching with 3D Shape Contexts, In
the 7th Central European Seminar on Computer Graphics, 2003.
[5] A. Frome, D. Huber, R. Kolluri, T. Bulow, and J. Malik, Recognizing objects in range data using
regional point descriptors, In Proceedings of the 8th European Conference on Computer Vision,
2004.
[6] A. Oliva, and A. Torralba, Building the Gist of a Scene: The Role of Global Image Features in
Recognition, In Visual Perception, Progress in Brain Research, vol 155, 2006.
[7] D. Hoiem, A. Efros, M. Hebert, Geometric Context from a Single Image, In Proceedings of the
Tenth IEEE International Conference on Computer Vision, 2005.
[8] T. Darrell and K. Wohn, Pyramid based depth from focus, In Proceedings of Computer Vision
and Pattern Recognition, 1988.
[9] D. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, International Journal of
Computer Vision, 2004.
[10] J. Matas, O. Chum, and M. Urban, and T. Pajdla, Robust wide baseline stereo from maximally
stable extremal regions. In British Machine Vision Conference, 2002.
[11] A. Saxena, M. Sun, A. Y. Ng, Make3D: Learning 3-D Scene Structure from a Single Still
Image, In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2008.
[12] A. Levin, R. Fergus, F. Durand, and W. T. Freeman, Image and depth from a conventional camera with a coded aperture, ACM Transactions on Graphics, 2007.
[13] Bastian Leibe and Ales Leonardis and Bernt Schiele, Combined Object Categorization and
Segmentation With An Implicit Shape Model In ECCV workshop on statistical learning in computer vision, 2004
[14] Krystian Mikolajczyk and Cordelia Schmid, A Performance Evaluation of Local Descriptors,
In PAMI, 2005.
[15] R. Ewerth, M. Schwalb, Martin, and B. Freisleben, Using depth features to retrieve monocular video shots, In Proceedings of the 6th ACM international conference on image and video
retrieval, 2007.
[16] E. Sudderth, A. Torralba, W. T. Freeman, and A. Wilsky, Depth from Familiar Objects: A
Hierarchical Model for 3D Scenes, In Proceedings of Computer Vision and Pattern Recognition,
2006.
[17] A. Wedel, U. Franke, J. Klappstein, T. Brox, and D. Cremers, Realtime Depth Estimation and
Obstacle Detection from Monocular Video, DAGM-Symposium, 2006.
[18] Junsong Yuan, Zicheng Liu and Ying Wu, Discriminative Subvolume Search for Efficient Action Detection, In Proceedings of Computer Vision and Pattern Recognition, 2009.
[19] Christoph H. Lampert and Matthew B. Blaschko and Thomas Hofmann, Efficient Subwindow
Search: A Branch and Bound Framework for Object Localization, In Transactions on Pattern
Analysis and Machine Intelligence (PAMI), 2009.
[20] Tom Yeh, John Lee and Trevor Darrell, Fast Concurrent Object Localization and Recognition,
In CVPR 2009.
[21] Junsong Yuan and Zicheng Liu and Ying Wu, Discriminative Subvolume Search for Efficient
Action Detection, In CVPR 2009.
3,429 | 4,105 | Simultaneous Object Detection and Ranking with
Weak Supervision
Matthew B. Blaschko
Andrea Vedaldi
Andrew Zisserman
Department of Engineering Science
University of Oxford
United Kingdom
Abstract
A standard approach to learning object category detectors is to provide strong supervision in the form of a region of interest (ROI) specifying each instance of
the object in the training images [17]. In this work our goal is to learn from heterogeneous labels, in which some images are only weakly supervised, specifying
only the presence or absence of the object or a weak indication of object location,
whilst others are fully annotated.
To this end we develop a discriminative learning approach and make two contributions: (i) we propose a structured output formulation for weakly annotated images
where full annotations are treated as latent variables; and (ii) we propose to optimize a ranking objective function, allowing our method to more effectively use
negatively labeled images to improve detection average precision performance.
The method is demonstrated on the benchmark INRIA pedestrian detection dataset
of Dalal and Triggs [14] and the PASCAL VOC dataset [17], and it is shown that
for a significant proportion of weakly supervised images the performance achieved
is very similar to the fully supervised (state of the art) results.
1 Introduction
Learning from weakly annotated data is a long standing goal for the practical application of machine learning techniques to real world data. Expensive manual labeling steps should be avoided if
possible, while weakly labeled and unlabeled data sources should be exploited in order to improve
performance with little to no additional cost. In this work, we propose a unified framework for
learning to detect objects in images from data with heterogeneous labels. In particular, we consider
the case of image collections for which we would like to predict bounding box localizations, but that
(for a significant proportion of the training data) only image level binary annotations are provided
indicating the presence or absence of an object, or that weak indications of object location are given
without a precise bounding box annotation.
We approach this task from the perspective of structured output learning [3, 35, 36], building on the
approach of Blaschko and Lampert [8], in which a structured output support vector machine formulation [36] is used to directly learn a regressor from images to object localizations parameterized
by the coordinates of a bounding box. We extend this framework here to weakly annotated images
by treating missing information in a latent variable fashion following [2, 40]. Available annotation,
such as the presence or absence of an object in an image, constrains the set of values the latent variable can take. In the case that complete label information is provided [40] reduces to [36], giving
a unified framework for data with heterogeneous levels of annotation. We empirically observe that
the localization approach of [8] fails in the case that there are many images with no object present,
motivating a slight modification of the learning algorithm to optimize detection ranking analogous
to [11, 21, 41]. We extend these works to the case that the predictions to be ranked are structured
outputs. When combined with discriminative latent variable learning, this results in an algorithm
similar to multiple instance ranking [6], but we exploit the full generality of structured output learning.
The computer vision literature has approached learning from weakly annotated data in many different ways. Search engine results [20] or associated text captions [5, 7, 13, 34] are attractive due to
the availability of millions of tagged or captioned images on the internet, providing a weak form of
labels beyond unsupervised learning [37]. This generally leads to ambiguity as captions tend to be
correlated with image content, but may contain errors. Alternatively, one may approach the problem
of object detection by considering generic properties of objects or their attributes in order to combine training data from multiple classes [1, 26, 18]. Deselaers et al. learn the common appearance
of multiple object categories, which yields an estimate of where in an image an object is without
specifying the specific class to which it belongs [15]. This can then be utilized in a weak supervision
setting to learn a detector for a specific object category. Carbonetto et al. consider a Bayesian framework for learning across incomplete, noisy, segmentation-level annotation [10]. Structured output
learning with latent variables has been proposed for inferring partial truncation of detections due to
occlusion or image boundaries [38]. Image level binary labels have often been used, as this generally
takes less time for a human annotator to produce [4, 12, 23, 28, 30, 31, 33]. Here, we consider this
latter kind of weak annotation, and will also consider cases where the object center is constrained to
a region in the image, but that exact coordinates are not given [27]. Simultaneous localization and
classification using a discriminative latent variable model has been recently explored in [29], but
that work has not considered mixed annotation, or a structured output loss.
The rest of this paper is structured as follows. In Section 2 we review a structured output learning
formulation for object detection that will form the basis of our optimization. We then propose to
improve that approach to better handle negative training instances by developing a ranking objective
in Section 3. The resulting objective allows us to approach the problem of weakly annotated data in
Section 4, and the methods are empirically validated in Section 5.
2 Object Detection with Structured Output Learning
Structured output learning generalizes traditional learning settings to the prediction of more complex output spaces, in which there may be non-trivial interdependencies between components of the output. In our case, we would like to learn a mapping f : X → Y, where X is the space of images and Y is the space of bounding boxes or no bounding box: Y ≡ ∅ ∪ {(l, t, r, b)}, where (l, t, r, b) ∈ R⁴ specifies the left, top, right, and bottom coordinates of a bounding box. This approach was first proposed by [8] using the Structured Output SVM formulation of [36]:
min_{w,ξ}  (1/2) ‖w‖² + (C/n) Σ_i ξ_i   (1)

s.t.  ⟨w, φ(x_i, y_i)⟩ − ⟨w, φ(x_i, y)⟩ ≥ Δ(y_i, y) − ξ_i    ∀i, y ∈ Y \ {y_i}   (2)

      ξ_i ≥ 0    ∀i   (3)
where Δ(y_i, y) is a loss for predicting y when the true output is y_i, and φ(x_i, y_i) is a joint kernel map that measures statistics of the image x_i local to the bounding box y_i [8, 9].¹ Training is achieved using delayed constraint generation, and at test time a prediction is made by computing f(x) = argmax_y ⟨w, φ(x, y)⟩.
It was proposed in [8] to treat images in which there is no instance of the object of interest as zero vectors in the Hilbert space induced by φ, i.e. φ(x, y_∅) = 0 ∀x, where y_∅ indicates the label that there is no object in the image (i.e. y_∅ ≡ ∅). During training, constraints are generated by finding ŷ_i* = argmax_{y ∈ Y\{y_i}} ⟨w, φ(x_i, y)⟩ + Δ(y_i, y). For negative images, Δ(y_∅, y) = 1 if y indicates an object is present, so the maximization corresponds simply to finding the bounding box with the highest score. The resulting constraint corresponds to:

ξ_i ≥ 1 + ⟨w, φ(x_i, ŷ_i*)⟩   (4)
¹ As in [8], we make use of the margin rescaling formulation of structured output learning. The slack
rescaling variant is equally applicable [36].
which tends to decrease the score associated with all bounding boxes in the image. The primary problem with this approach is that it optimizes a regularized risk functional for which negative images are treated equally with positive images. In the case of imbalances in the training data where a large majority of images do not contain the object of interest, the objective function may be dominated by the terms in Σ_i ξ_i for which there is no bounding box present. The learning
procedure may focus on decreasing the score of candidate detections in negative images rather than
on increasing the score of correct detections. We show empirically in Section 5 that this treatment
of negative images is in fact detrimental to localization performance. The results presented in [8]
were achieved by training only on images with an instance of the object present, ignoring large
quantities of negative training data. Although one may attempt to address this problem by adjusting
the loss function, Δ, to penalize negative images less than positive images, this approach is heuristic
and requires searching over an additional parameter during training (the relative size of the loss
for negative images). We address this imbalance more elegantly without introducing additional
parameters in the following section.
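For illustration, the loss-augmented inference step used for constraint generation (Eqs. 1-4) can be sketched as follows; `score` and `candidates` are assumed helpers, and in practice the argmax is computed by branch-and-bound rather than exhaustive enumeration:

```python
# Sketch of finding the most violated label for delayed constraint generation.
# score(w, x, y) computes <w, phi(x, y)>; candidates(x) enumerates boxes.

def most_violated_label(w, x, y_true, delta, score, candidates):
    best_y, best_val = None, float("-inf")
    for y in candidates(x):
        if y == y_true:
            continue
        val = score(w, x, y) + delta(y_true, y)   # loss-augmented score
        if val > best_val:
            best_y, best_val = y, val
    return best_y  # added as a constraint if its slack is violated
```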
3 Learning to Rank
We propose to remedy the shortcomings outlined in the previous section by modifying the objective
in Equation (1) to simultaneously localize and rank object detections. The following constraints
applied to the test set ensure a perfect ranking, that is, that every true detection has a higher score than all false detections:

⟨w, φ(x_i, y_i)⟩ > ⟨w, φ(x_j, ŷ_j)⟩    ∀i, j, ŷ_j ∈ Y \ {y_j}.   (5)
We modify these constraints, incorporating a structured output loss, in the following structured
output ranking objective
min_{w,ξ}  (1/2) ‖w‖² + (C/(n · n_+)) Σ_{i,j} ξ_{ij}   (6)

s.t.  ⟨w, φ(x_i, y_i)⟩ − ⟨w, φ(x_j, ŷ_j)⟩ ≥ Δ(y_j, ŷ_j) − ξ_{ij}    ∀i, j, ŷ_j ∈ Y \ {y_j}   (7)

      ξ_{ij} ≥ 0    ∀i, j   (8)
where n+ denotes the number of positive instances in the training set. As compared with Equations (1)-(3), we now compare each positive instance to all bounding boxes in all images in the
training set instead of just the bounding boxes from the image it comes from. The constraints attempt to give all positive instances a score higher than all negative instances, where the size of
the margin is scaled to be proportional to the loss achieved by the negative instance. We note that
one can use this same approach to optimize related ranking objectives, such as precision at a given
detection rate, by extending the formulations of [11, 41] to incorporate our structured output loss
function, Δ.
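The loss Δ is left abstract in the text; a common choice in this line of work, e.g. the area-overlap loss of [8], is one minus the intersection-over-union of the two boxes, with maximal loss when exactly one label says "no object". A sketch under that assumption:

```python
# Assumed overlap-based structured loss for detection, following [8].

def box_loss(y_true, y_pred):
    if y_true is None or y_pred is None:
        return float(y_true is not y_pred)  # 0 if both empty, else 1
    (l1, t1, r1, b1), (l2, t2, r2, b2) = y_true, y_pred
    iw = max(0.0, min(r1, r2) - max(l1, l2))
    ih = max(0.0, min(b1, b2) - max(t1, t2))
    inter = iw * ih
    union = (r1 - l1) * (b1 - t1) + (r2 - l2) * (b2 - t2) - inter
    return 1.0 - inter / union if union > 0 else 1.0
```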
As in [8, 36] we have an intractable number of constraints in Equation (7). We will address this
problem using a constraint generation approach with a 1-slack formulation
min_{w,ξ}  (1/2) ‖w‖² + C ξ   (9)

s.t.  Σ_{ij} [⟨w, φ(x_i, y_i)⟩ − ⟨w, φ(x_j, ŷ_j)⟩] ≥ Σ_{ij} Δ(y_j, ŷ_j) − ξ    ∀ŷ ∈ ∏_j Y \ {y_j}   (10)

      ξ ≥ 0   (11)

where ŷ is a vector with j-th element ŷ_j. Although this results in a number of constraints exponential
in the number of training examples, we can solve this efficiently using a cutting plane algorithm. The
proof of equivalence between this optimization problem and that in Equations (6)-(8) is analogous
to the proof in [22, Theorem 1]. We are only left to find the maximally violated constraints in
Equation (10). Algorithm 1 gives an efficient procedure for doing so.
Algorithm 1: 1-slack structured output ranking (maximally violated constraint)
Ensure: the maximally violated constraint is Δ − ⟨w, Φ⟩ ≤ ξ
  for all i do
    s⁺_i = ⟨w, φ(x_i, y_i)⟩
  end for
  for all j do
    ŷ_j* = argmax_y ⟨w, φ(x_j, y)⟩ + Δ(y_j, y)
    s⁻_j = ⟨w, φ(x_j, ŷ_j*)⟩ + Δ(y_j, ŷ_j*)
  end for
  (s⁺, p⁺) = sort(s⁺)   {p⁺ is a vector of indices specifying a given score's original index}
  (s⁻, p⁻) = sort(s⁻)
  Φ = 0, k = 1, Δ = Φ⁺ = 0
  for all j do
    while k ≤ n_+ and s⁻_j > s⁺_k do
      Φ⁺ = Φ⁺ + φ(x_{p⁺_k}, y_{p⁺_k})
      k = k + 1
    end while
    Φ = Φ + Φ⁺ − (k − 1) · φ(x_{p⁻_j}, ŷ*_{p⁻_j})
    Δ = Δ + (k − 1) · Δ(y_{p⁻_j}, ŷ*_{p⁻_j})
  end for

Algorithm 1 works by first scoring all positive regions, as well as finding and scoring the maximally violated regions from each image. We make use of the transitivity of ordering these two sets of scores to avoid comparing all pairs in a naïve fashion. If ⟨w, φ(x_j, ŷ_j*)⟩ ≤ ⟨w, φ(x_i, y_i)⟩ and ⟨w, φ(x_i, y_i)⟩ ≤ ⟨w, φ(x_p, y_p)⟩, we do not have to compare ⟨w, φ(x_j, ŷ_j*)⟩ and ⟨w, φ(x_p, y_p)⟩. Instead, we sort the instances of the class by their score, and sort the negative instances by their score as well. We keep an accumulator vector for positive images, Φ⁺, and a count of the number of violated constraints, (k − 1). We iterate through each violated region, ordered by score, and sum the violated constraints into Φ and Δ, yielding the maximally violated 1-slack constraint.
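An executable Python sketch of Algorithm 1 follows; the array names (scores, joint feature maps, and losses of the maximally violated boxes, assumed to be NumPy arrays) are assumptions of the sketch:

```python
import numpy as np

def one_slack_ranking_constraint(s_plus, phi_pos, s_minus, phi_neg, delta):
    """Merge sorted positive and loss-augmented negative scores to accumulate
    the maximally violated 1-slack constraint: <w, Phi> >= Delta_sum - xi."""
    order_p = np.argsort(s_plus)            # positives, ascending score
    order_n = np.argsort(s_minus)           # violated boxes, ascending score
    n_pos, d = phi_pos.shape
    Phi = np.zeros(d)                       # accumulated constraint feature
    Phi_plus = np.zeros(d)                  # running sum of positive features
    Delta_sum, k = 0.0, 0
    for j in order_n:
        # every positive with score below s_minus[j] forms a violated pair
        while k < n_pos and s_minus[j] > s_plus[order_p[k]]:
            Phi_plus += phi_pos[order_p[k]]
            k += 1
        Phi += Phi_plus - k * phi_neg[j]
        Delta_sum += k * delta[j]
    return Phi, Delta_sum
```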
4 Weakly Supervised Data
Now that we have developed a structured output learning framework that is capable of appropriately
handling images from the background class, we turn our attention to the problem of learning with
weakly annotated data. We will consider the problem in full generality by assuming that we have
bounding box level annotation for some training images, but only binary labels or weak location
information for others. For negatively labeled images, we know that no bounding box in the entire
image contains an instance of the object class, while for positive images at least one bounding box
belongs to the class of interest. We approach this issue by considering the location of a bounding
box to be a latent variable to be inferred during training. The value that this variable can take is
constrained by the weak annotation. In the case that we have only a binary image-level label, we
constrain the latent variable to indicate that some region of the image corresponds to the object of
interest. In a more constrained case, such as annotation indicating the object center, we constrain
the latent variable to belong to the set of bounding boxes that have a center consistent with the annotation. There is an asymmetry in the image level labeling in that negative labels can be considered
to be full annotation (i.e. all bounding boxes do not contain an instance of the object), while positive labels are incomplete.2 We consider the index variable j to range over all completely labeled
images, including negative images.
We consider a modification of the constrained objective developed in the previous section to include
constraints of the form given in Equation (7), but also constraints for our weakly annotated positive
images, which we index by m,
max_{ŷ_m ∈ Y_m} ⟨w, φ(x_m, ŷ_m)⟩ − ⟨w, φ(x_j, ŷ_j)⟩ ≥ Δ(y_j, ŷ_j) − ξ_{mj}    ∀m, j, ŷ_j ∈ Y \ {y_j},   (12)
² Note that this is exactly the asymmetry discussed in [2] in the context of multiple instance learning. Our
setting can be seen as a generalization to mixed annotations.
where Y_m is the set of bounding boxes consistent with the weak annotation for image m. Due to the maximization over ŷ_m, the optimization is no longer convex, but we can find a local optimum using the CCCP algorithm [40]. This is effectively equivalent to the case of loss-rescaled multiple instance
learning, and we note that the resulting objective has similarities to that of [2]. Viewed another way,
we treat the location of the hypothesized bounding box as a latent variable. In order to use this in our
discriminative optimization, we will try to put a large margin between the maximally scoring box
and all bounding boxes with high loss. Though our algorithm does not have direct information about
the true location of the object of interest, it tries to learn a discriminant function that can distinguish
a region in the positively labeled images from all regions in the negatively labeled images.
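A sketch of the resulting alternation: impute the latent boxes under the current model, then re-solve the convex fully supervised ranking problem. The helper names are hypothetical:

```python
# CCCP-style alternation for weakly annotated positives (Y_m is the set of
# boxes consistent with the weak annotation of image x_m).

def train_with_weak_labels(strong_data, weak_images, w_init,
                           score, train_ranking_svm, n_iters=10):
    w = w_init
    for _ in range(n_iters):
        # latent step: fill in each latent box consistent with the annotation
        imputed = [(x_m, max(Y_m, key=lambda y: score(w, x_m, y)))
                   for (x_m, Y_m) in weak_images]
        # convex step: ranking SSVM on strongly labelled plus imputed data
        w = train_ranking_svm(strong_data + imputed)
    return w
```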
5 Results
We validate our model on the benchmark INRIA pedestrian detection dataset of Dalal and
Triggs [14] using a histogram of oriented gradients (HOG) representation, and the PASCAL VOC
dataset [16, 17]. Following [9, 24, 25], we provide detailed results on the cat class as the high variation in pose is appropriate for testing a bag of words model, but also provide summary results for all
classes in the form of improvement in mean average precision (mean AP). We first illustrate the performance of the ranking objective developed in Section 3 and subsequently show the performance
of learning with weakly supervised data using the latent variable approach of Section 4.
5.1 Experimental Setup
We have implemented variants of two popular object detection systems in order to show the generalization of the approaches developed in this work to different levels of supervision and feature
descriptors. In the first variant, we have used a linear bag of words model similar to that developed
in [8, 24, 25]. Inference of maximally violated constraints and object detection was performed using
Efficient Subwindow Search (ESS) branch-and-bound inference [24, 25]. The joint kernel map, φ,
was constructed using a concatenation of the bounding box visual words histogram (the restriction
kernel) and a global image histogram, similar to the approach described in [9]. Results are presented
on the VOC 2007 dataset [16, 17].
The second variant of the detector is based on the histogram of oriented gradients (HOG) representation [14]. HOG subdivides the image into cells, usually of size 8 × 8 pixels, and computes for each cell a weighted histogram of the gradient orientations. The experiments use the HOG variant
of [19], which results in a 31-dimensional histogram for each cell. The HOG features are extracted
at multiple scales, forming a pyramid. An object is described by a rectangular arrangement of HOG
cells (the aspect ratio of the rectangular grouping is fixed). The joint feature map, φ, extracts from
the HOG representation of the image the rectangular group of HOG cells at a given scale and location [38]. A constant bias term is appended to the resulting feature [38] for all but the ranking cost
functional, as the bias term cancels out in that formulation. Note that the model is analogous to the
HOG detector of [14], and in particular does not use flexible parts as in [19]. Results are presented
for the INRIA pedestrian data set [14].
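As a minimal illustration of the cell histograms just described (block normalization and interpolation, which the full HOG pipeline uses, are omitted here):

```python
import numpy as np

def cell_histogram(patch, n_bins=9):
    """Histogram of unsigned gradient orientations over one cell,
    weighted by gradient magnitude; a simplified HOG-cell sketch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist
```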
5.2 Learning to Rank
In order to evaluate the effects of optimizing the ranking objective developed in this work, we begin
by comparing the performance of the objective in Equations (6)-(8) in a fully supervised setting with
that of the objective in Equations (1)-(3), which correspond to the optimization proposed in [8].
In Figure 1, we show the relative performance of the linear bag of visual words model applied to the
PASCAL VOC 2007 data set [17]. We first show results for the cat class in which 10% of negative
images are included in the training set (Figure 1(a)), and subsequently results for which all negative
images are used for training (Figure 1(b)). While the ranking objective can appropriately handle
varying amounts of negative training data, the objective in Equation (1) fails, resulting in worse
performance as the amount of negative training data increases. These results empirically show the
shortcomings of the treatment of negative images proposed in [8], but the ranking objective by
contrast is robust to large imbalances between positive and negative images. Mean AP increases by
69% as a result of using the ranking objective when 10% of negative images are included during
training, and mean AP improves by 71% when all negative images are used.
[Figure 1 plots: precision-recall curves for the cat class, with precision on the horizontal axis (0 to 0.3) and recall on the vertical axis (0 to 0.4), comparing the ranking objective and the standard objective in each panel.]
(a) cat class trained with 10% of available negative images. (b) cat class trained with 100% of available negative images.
Figure 1: Precision-recall curves for the structured output ranking objective proposed in this paper
(blue) vs. the structured output objective proposed in [8] (red) for varying amounts of negative
training data. Results are shown on the cat class from the PASCAL VOC 2007 data set for 10%
of negative images (1(a)) and for 100% of negatives (1(b)). In all cases a linear bag of visual words
model was employed (see text for details). The structured output objective proposed in [8] performs
worse with increasing amounts of negative training data, and the algorithm completely fails in 1(b).
The ranking objective, on the other hand, does not suffer from this shortcoming (blue curves).
Figure 2(a) analyzes the performance of the HOG pedestrian detector on the INRIA data set. Three cost functionals are compared: a simple binary SVM, the structural SVM model of (1), and the ranking SVM model of (6). The INRIA dataset contains 1218 negative images (i.e., images not containing people). Each image is subdivided (in scale and space) into twenty sub-images, and a maximally violating window (object location) is extracted from each of those. This results in 24360 negative windows. The dataset also contains 612 positive images, for a total of 1237 labeled pedestrians. Thus there are about twenty times more negative examples than positive ones. Reweighted
versions of the binary and structural SVM models that balance the number of positive and negative
examples are also tested. As the figure shows, balancing the data in the cost functional is important,
especially for the binary SVM model; the ranking model is slightly superior to the other formulations, with average precision of 77%, and does not require an adjustment to the loss to account for
a given level of data imbalance. By comparison, the state-of-the-art detector of [32] has average
precision 78%. We conjecture that this small difference in performance is due to their use of color
information.
5.3 Learning with Weak Annotations
To evaluate the objective in the case of weak supervision, we have additionally performed experiments in which we have varied the percentage of bounding box annotations provided to the learning
algorithm.
Figure 3 contrasts the performance on the VOC dataset of our proposed discriminative latent variable algorithm with that of a fully supervised algorithm in which weakly annotated training data are
ignored. We have run the algorithm for 10% of images having full bounding box annotations (with
the other 90% weakly labeled) and for 50% of images having complete annotation. In the fully supervised case, we ignore all images that do not have full bounding box annotation and train the fully
supervised ranking objective developed in Section 3. In all cases, the latent variable model performs
convincingly better than subsampling. For 10% of images fully annotated, mean AP increases by
64%, and with 50% of images fully annotated, mean AP increases by 83%.
Figure 2(b) reports the performance of the latent variable ranking model (8) for the HOG-based detector on the INRIA pedestrian dataset. Only one positive image is fully labeled with the pedestrian
bounding boxes while the remaining positive images are weakly labeled. Since most positive images
contain multiple pedestrians, the weak annotations carry a minimal amount of information that is
still sufficient to distinguish the different pedestrian instances. Specifically, the bounding boxes are
discarded and only their centers are kept. Estimating the latent variables consists of a search over
[Figure 2 plots: precision-recall curves with recall on the horizontal axis and precision on the vertical axis. Panel (a) legend (average precision): 73.97% (structural), 59.51% (binary), 76.23% (structural bal.), 75.85% (binary bal.), 77.33% (rank). Panel (b) legend: 31.89% (no weak), 50.83% (50 weak), 54.30% (100 weak), 59.68% (200 weak), 66.10% (500 weak), 75.35% (all weak).]
Figure 2: (a) Precision-recall curves for different formulations: binary and structural SVMs, balanced binary and structural SVMs, ranking SVM. The unbalanced SVMs, and in particular the
binary one, do not work well due to the large number of negative examples compared to the positive
ones. The ranking formulation is slightly better than the other balanced costs for this dataset. (b)
Precision-recall curves for increasing amounts of weakly supervised data for the ranking formulation. For all curves, only one image is fully labeled with bounding boxes around pedestrians, while
the other images are labeled only by the pedestrian centers. The first curve (AP 32%) corresponds
to the case in which only the fully supervised image is used; the last curve (AP 75%) to the case in
which all the other training images are added with weak annotations. The performance is almost as
good as the fully supervised case (AP 77%) of (a).
[Figure 3 plots: precision-recall curves for the cat class, with precision on the horizontal axis and recall on the vertical axis (both 0 to 0.45), comparing weak supervision and subsampling in each panel.]
(a) cat class trained with 10% of bounding boxes. (b) cat class trained with 50% of bounding boxes.
Figure 3: Precision-recall curves for the structured output ranking objective proposed in this paper
trained with a linear bag of words image representation and weak supervision (blue) vs. only using
fully labeled samples (red). Results are shown for 10% of bounding boxes (left) and for 50% of
bounding boxes (right), the remainder of the images were provided with weak annotation indicating
the presence or absence of an object in the image, but not the object location. In both cases, the
latent variable model (blue) results in performance that is substantially better than discarding weakly
annotated images and using a fully supervised setting (red).
all object locations and scales for which the corresponding bounding box center is within a given
bound of the labeled center (the bound is set to 25% of the length of the box diagonal). In other
words, a weak annotation contains only approximate location information. This gives robustness to
inaccuracies in manually labeling the centers. The figure shows how the model performs when, in
addition to the singly fully annotated image, an increasing number of weakly annotated images are
added. Starting from 32% AP, the method improves up to 75% AP, which is remarkably similar to
the best result (77% AP) obtained with full supervision.
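A minimal sketch of this constrained latent search is given below: among candidate windows over locations and scales, keep those whose center lies within 25% of their own diagonal length of the weak center label, and return the highest-scoring one. Names and the explicit candidate list are assumptions; in the experiments the maximization is carried out over the HOG pyramid rather than an enumerated set.

```python
import numpy as np

def estimate_latent_box(candidate_boxes, scores, weak_center, frac=0.25):
    """Pick the best-scoring window consistent with a weak center label.

    candidate_boxes: iterable of (x0, y0, x1, y1) over locations and scales.
    scores: detector scores w . psi(x, box), one per candidate.
    """
    best_box, best_score = None, -np.inf
    for box, s in zip(candidate_boxes, scores):
        x0, y0, x1, y1 = box
        center = np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])
        diag = np.hypot(x1 - x0, y1 - y0)
        if np.linalg.norm(center - weak_center) <= frac * diag and s > best_score:
            best_box, best_score = box, s
    return best_box
```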
6 Discussion
We can draw several conclusions from the results in Section 5. First, using the learning formulation
developed in [8], negative images are not handled properly, resulting in the undesired behavior
that additional negative images in the training data decrease performance. The special case of the
objective in Equations (1)-(3), for which no negative training data are incorporated, can be viewed
roughly as an estimate of the log probability of an object being present at a location conditioned on
that an object is present in the image. While this results in reasonable performance in terms of recall
(c.f. [8]), it does not result in a good average precision (AP) score. In fact, the results presented in [8]
were computed by training the objective function only on positive images, and then using a separate
non-linear ranking function based on global image statistics. Using only positively labeled images
in the objective presented in Section 2 only incorporates a subset of the constraints in Equation (7)
corresponding to i = j. Incorporating all these constraints directly optimizes ranking, enabling the
use of all available negative training data to improve localization performance.
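Since the comparison hinges on AP, it is worth making the score explicit. The snippet below computes the plain (non-interpolated) AP from a ranked list of labels; the VOC evaluation uses an interpolated variant, so this is an illustration of the criterion rather than the benchmark code.

```python
def average_precision(labels_by_decreasing_score):
    """AP = mean of the precision values measured at each true positive.

    labels_by_decreasing_score: 1 for a correct detection, 0 otherwise,
    sorted by decreasing detector score.
    """
    true_positives, precisions = 0, []
    for rank, label in enumerate(labels_by_decreasing_score, start=1):
        if label == 1:
            true_positives += 1
            precisions.append(true_positives / rank)
    return sum(precisions) / max(true_positives, 1)
```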
Reweighting the loss corresponding to positive and negative examples resulted in similar performance to the ranking objective on the INRIA pedestrian data set, but requires a search across an
additional parameter. From the perspective of regularized risk, subsampling negative images can be
viewed as a noisy version of this reweighting, and experiments on PASCAL VOC using the objective in (1) showed poor performance over a wide range of sampling rates. The ranking objective
by contrast weights loss from the negative examples appropriately (Algorithm 1) according to their
contribution to the loss for the precision-recall curve. This is a much more principled and robust
criterion for setting the loss function.
By using the ranking objective to treat negative images, learning with weak annotations was made
directly applicable using a discriminative latent variable model. Results showed consistent improvement across different proportions of weakly and fully supervised data. Our formulation handled
different ratios of weakly annotated and fully annotated training data without additional parameter
tuning in the loss function. The discriminative latent variable approach has been able to achieve
performance within a few percent of that achieved by a fully supervised system using only one fully
supervised label. The weak labels used for the remaining data are significantly less expensive to
supply [39]. That this is consistent across the data sets reported here indicates that discriminative
latent variable models are a promising strategy for treating weak annotation in general.
Acknowledgments
The first author is supported by the Royal Academy of Engineering through a Newton International
Fellowship. The research leading to these results has received funding from the European Research
Council under the European Community's Seventh Framework Programme (FP7/2007-2013) / ERC
grant agreement no. 228180, and from the PASCAL2 network of excellence.
References
[1] B. Alexe, T. Deselaers, and V. Ferrari. What is an object? In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, June 2010.
[2] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning.
In Advances in Neural Information Processing Systems, pages 561?568. MIT Press, 2003.
[3] G. H. Bakır, T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, and S. V. N. Vishwanathan. Predicting
Structured Data. MIT Press, 2007.
[4] A. Bar Hillel, T. Hertz, and D. Weinshall. Efficient learning of relational object class models. In Proceedings of the International Conference on Computer Vision, pages 1762?1769, 2005.
[5] T. Berg, A. Berg, J. Edwards, M. Mair, R. White, Y. Teh, E. Learned-Miller, and D. Forsyth. Names and
Faces in the News. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
Washington, DC, 2004.
[6] C. Bergeron, J. Zaretzki, C. Breneman, and K. P. Bennett. Multiple instance ranking. In Proceedings of
the International Conference on Machine Learning, pages 48?55, 2008.
[7] M. B. Blaschko and C. H. Lampert. Correlational spectral clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[8] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. In
Proceedings of the European Conference on Computer Vision, 2008.
[9] M. B. Blaschko and C. H. Lampert. Object localization with global and local context kernels. In Proceedings of the British Machine Vision Conference, 2009.
[10] P. Carbonetto, G. Dorkó, C. Schmid, H. Kück, and N. Freitas. Learning to recognize objects with little
supervision. International Journal of Computer Vision, 77(1?3):219?237, 2008.
[11] O. Chapelle and S. S. Keerthi. Efficient algorithms for ranking with svms. Information Retrieval, 2009.
[12] O. Chum and A. Zisserman. An exemplar model for learning object classes. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 2007.
[13] T. Cour, B. Sapp, C. Jordan, and B. Taskar. Learning from ambiguously labeled images. In Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[14] N. Dalal and B. Triggs. Histogram of Oriented Gradients for Human Detection. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 886?893, 2005.
[15] T. Deselaers, B. Alexe, and V. Ferrari. Localizing objects while learning their appearance. In Proceedings
of the European Conference on Computer Vision, 2010.
[16] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman.
The
PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results.
http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html, 2007.
[17] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object
classes (voc) challenge. International Journal of Computer Vision, 88(2):303?338, June 2010.
[18] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition, pages 1778?1785, 2009.
[19] P. Felzenszwalb, D. Mcallester, and D. Ramanan. A discriminatively trained, multiscale, deformable part
model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[20] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google?s image
search. In Proceedings of the International Conference on Computer Vision, 2005.
[21] T. Joachims. Optimizing search engines using clickthrough data. In KDD ?02: Proceedings of the eighth
ACM SIGKDD international conference on Knowledge discovery and data mining, pages 133?142, New
York, NY, USA, 2002. ACM.
[22] T. Joachims, T. Finley, and C.-N. J. Yu. Cutting-plane training of structural svms. Machine Learning,
77(1):27?59, 2009.
[23] G. Kim and A. Torralba. Unsupervised detection of regions of interest using iterative link analysis. In
Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural
Information Processing Systems, pages 961?969. 2009.
[24] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Beyond sliding windows: Object localization by efficient subwindow search. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[25] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Efficient subwindow search: A branch and bound
framework for object localization. IEEE Transactions on Pattern Analysis and Machine Intelligence,
2009.
[26] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class
attribute transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 951?958, 2009.
[27] B. Leibe, A. Leonardis, and B. Schiele. Combined object categorization and segmentation with an implicit
shape model. In Workshop on Statistical Learning in Computer Vision, ECCV, May 2004.
[28] F. Moosmann, D. Larlus, and F. Jurie. Learning saliency maps for object categorization. In ECCV
International Workshop on The Representation and Use of Prior Knowledge in Vision, 2006.
[29] M. H. Nguyen, L. Torresani, F. De la Torre Frade, and C. Rother. Weakly supervised discriminative
localization and classification: A joint learning process. In Proceedings of the International Conference
on Computer Vision, 2009.
[30] A. Opelt, A. Fussenegger, A. Pinz, and P. Auer. Weak hypotheses and boosting for generic object detection
and recognition. In Proceedings of the 8th European Conference on Computer Vision, Prague, Czech
Republic, volume 2, pages 71?84, 2004.
[31] A. Opelt and A. Pinz. Object localization with boosting and weak supervision for generic object recognition. In Scandinavian Conference on Image Analysis, pages 862?871, 2005.
[32] P. Ott and M. Everingham. Implicit color segmentation features for pedestrian and object detection. In
Proceedings of the International Conference on Computer Vision, 2009.
[33] C. Pantofaru and M. Hebert. A framework for learning to recognize and segment object classes using
weakly supervised training data. In Proceedings of the British Machine Vision Conference, 2007.
[34] N. Rasiwasia and N. Vasconcelos. Scene classification with low-dimensional semantic spaces and weak
supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[35] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In S. Thrun, L. Saul, and
B. Schölkopf, editors, Advances in Neural Information Processing Systems. 2004.
[36] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In Proceedings of the International Conference on Machine
Learning, 2004.
[37] T. Tuytelaars, C. H. Lampert, M. B. Blaschko, and W. Buntine. Unsupervised object discovery: A comparison. International Journal of Computer Vision, 88(2):61?85, 2010.
[38] A. Vedaldi and A. Zisserman. Structured output regression for detection with partial truncation. In
Advances in Neural Information Processing Systems, 2009.
[39] S. Vijayanarasimhan and K. Grauman. Multi-level active prediction of useful image annotations for recognition. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information
Processing Systems, pages 1705?1712. 2009.
[40] C.-N. J. Yu and T. Joachims. Learning structural svms with latent variables. In Proceedings of the
International Conference on Machine Learning, 2009.
[41] Y. Yue, T. Finley, F. Radlinski, and T. Joachims. A support vector method for optimizing average precision. In Special Interest Group on Information Retrieval, 2007.
Minimum Average Cost Clustering
Kiyohito Nagano
Institute of Industrial Science
University of Tokyo, Japan
[email protected]
Yoshinobu Kawahara
The Institute of Scientific and Industrial Research
Osaka University, Japan
[email protected]
Satoru Iwata
Research Institute for Mathematical Sciences
Kyoto University, Japan
[email protected]
Abstract
A number of objective functions in clustering problems can be described with
submodular functions. In this paper, we introduce the minimum average cost
criterion, and show that the theory of intersecting submodular functions can be
used for clustering with submodular objective functions. The proposed algorithm
does not require the number of clusters in advance, and it will be determined by
the property of a given set of data points. The minimum average cost clustering
problem is parameterized with a real variable, and surprisingly, we show that all
information about optimal clusterings for all parameters can be computed in polynomial time in total. Additionally, we evaluate the performance of the proposed
algorithm through computational experiments.
1 Introduction
A clustering of a finite set V of data points is a partition of V into subsets (called clusters) such
that data points in the same cluster are similar to each other. Basically, a clustering problem asks
for a partition P of V such that the intra-cluster similarity is maximized and/or the inter-cluster
similarity is minimized. The clustering of data is one of the most fundamental unsupervised learning
problems. We use a criterion function defined on all partitions of V , and the clustering problem
becomes that of finding a partition of V that minimizes the clustering cost under some constraints.
Suppose that the inhomogeneity of subsets of the data set is measured by a nonnegative set function
f : 2^V → R with f(∅) = 0, where 2^V denotes the set of all subsets of V, and the clustering cost of a partition P = {S_1, S_2, ..., S_k} is defined by f[P] = f(S_1) + · · · + f(S_k). A number of set functions that represent the inhomogeneity, including cut functions of graphs and entropy functions, are known to be submodular [3, 4]. Throughout this paper, we suppose that f is submodular, that is, f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T) for all S, T ⊆ V. A submodular function is known to be
a discrete counterpart of a convex function, and in recent years, its importance has been recognized
in the field of machine learning.
For any given integer k with 1 ≤ k ≤ n, where n is the number of points in V, a partition P of V is called a k-partition if there are exactly k nonempty elements in P, and is called an optimal k-clustering if P is a k-partition that minimizes the cost f[P] among all k-partitions. The problem of finding an optimal k-clustering is widely studied in combinatorial optimization and various fields, and it is recognized as a natural formulation of a clustering problem [8, 9, 10]. Even if f is a cut function of a graph, which is submodular and symmetric, that is, f(V \ S) = f(S) for all S ⊆ V, this problem is known to be NP-hard unless k can be regarded as a constant [5]. Zhao et al. [17] and Narasimhan et al. [10] dealt with the case when f is submodular and symmetric. Zhao et al. [17] gave a 2(1 − 1/k)-approximation algorithm using Queyranne's symmetric submodular function minimization algorithm [13]. Narasimhan et al. [10] showed that Queyranne's algorithm can be used
for clustering problems with some specific natural criteria. For a general submodular function and
a small constant k, constant factor approximation algorithms for optimal k-clusterings are designed
in [12, 18]. In addition, balanced clustering problems with submodular costs are studied in [8, 9].
Generally speaking, it is difficult to find an optimal k-clustering for any given k because the optimization problem is NP-hard even for simple special cases. Furthermore, the number of clusters
has to be determined in advance, regardless of the property of the data points, or an additional computation is required to find a proper number of clusters via some method like cross-validation. In
this paper, we introduce a new clustering criterion to resolve the above shortcomings of previous
approaches [10]. In the minimum average cost (MAC) clustering problem we consider, the objective function is the average cost of a partition P which combines the clustering cost f [P] and the
number of clusters |P|. Now the number of clusters is not pre-determined, but it will be determined
automatically by solving the combinatorial optimization problem. We argue that the MAC clustering
problem represents a natural clustering criterion. In this paper, we show that the Dilworth truncation
of an intersecting submodular function [2] (see also Chapter II of Fujishige [4] and Chapter 48 of
Schrijver [14]) can be used to solve the clustering problem exactly and efficiently. To the best of
our knowledge, this is the first time that the theory of intersecting submodular functions is used for
clustering. The MAC clustering problem can be parameterized with a real-valued parameter ? ? 0,
and the problem with respect to ? asks for a partition P of V that minimizes the average cost under
a constraint |P| > ?. The main contribution of this paper is a polynomial time algorithm that solves
the MAC clustering problem exactly for any given parameter ?. This result is in stark contrast to the
NP-hardness of the optimal k-clustering problems. Even more surprisingly, our algorithm computes
all information about MAC clusterings for all parameters in polynomial time in total.
In the case where f is a cut function of a graph, there are some related works. If f is a cut function
and ? = 1, the optimal value of the MAC clustering problem coincides with the strength of a graph
[1]. In addition, the computation of the principal sequence of partitions of a graph [7] is a special
case of the parametrized MAC clustering problem in an implicit way.
This paper is organized as follows. In Section 2, we formulate the minimum average cost clustering
problem, and show a structure property of minimum average cost clusterings. In Section 3, we
propose a framework of our algorithm for the minimum average cost clustering problem. In Section
4, we explain the basic results on the theory of intersecting submodular functions, and describe the
Dilworth truncation algorithm which is used in Section 3 as a subroutine. Finally, we show the result
of computational experiments in Section 5, and give concluding remarks in Section 6.
2 Minimum Average Cost Clustering
In this section, we give a definition of minimum average cost clusterings. After that, we show
a structure property of them. Let V be a given set of n data points, and let f : 2^V → R be a nonnegative submodular function with f(∅) = 0, which is not necessarily symmetric. For each subset S ⊆ V, the value f(S) represents the inhomogeneity of the data points in S. For a partition P = {S_1, ..., S_k}, the clustering cost is defined by f[P] = f(S_1) + · · · + f(S_k). We will introduce the minimum average cost criterion in order to take both the clustering cost f[P] and the number of clusters |P| into consideration.
2.1 Definition
Consider a k-partition P of V with k > 1, and compare P with the trivial partition {V} of V. Then the number of clusters has increased by k − 1 and the clustering cost has increased by f[P] + c, where c := −f(V) is a constant. Therefore, it is natural to define an average cost of P by f[P]/(|P| − 1). Suppose that P* is a partition of V that minimizes the average cost among all partitions P of V with |P| > 1. Remark that the number of clusters of P* is determined not by us, but by the property of the given data set. Therefore, it may be said that P* is a natural clustering.
More generally, using a parameter β ∈ [0, n) = {β ∈ R : 0 ≤ β < n}, we define an extended average cost by f[P]/(|P| − β). For any parameter β ∈ [0, n), we consider the minimum average cost (MAC) clustering problem
    λ(β) := min_P { f[P]/(|P| − β) : P is a partition of V, |P| > β }.    (1)
Let us say that a partition P is a β-MAC clustering if P is optimal for the problem (1) with respect to β ∈ [0, n). Naturally, the case where β = 1 is fundamental. Furthermore, we can expect finer clusterings for relatively large parameters. The problem (1) and the optimal k-clustering problem [10] are closely related.
Proposition 1. Let P be a β-MAC clustering for some β ∈ [0, n), and set k := |P|. Then we have f[P] ≤ f[Q] for any k-partition Q of V. In other words, P is an optimal k-clustering.
Proof. By definition, we have k > β and f[P]/(k − β) ≤ f[Q]/(k − β) for any k-partition Q.
We will show that all information about β-MAC clusterings for all parameters β can be computed in polynomial time in total. Our algorithm requires the help of the theory of intersecting submodular functions [4, 14]. Proposition 1 says that if there exists a β-MAC clustering P satisfying |P| = k, then we obtain an optimal k-clustering. Note that this fact is consistent with the NP-hardness of the optimal k-clustering problem, because the information about MAC clusterings gives only a portion of the information about optimal k-clusterings (k = 1, ..., n).
2.2 Structure property
We will investigate the structure of all β-MAC clusterings. Denote by R_+ the set of nonnegative real values. Let us choose a parameter β ∈ [0, n). If P is a partition of V satisfying |P| ≤ β, we have −βλ ≤ −|P|λ ≤ f[P] − |P|λ for all λ ∈ R_+. Hence the minimum average cost λ(β) defined in (1) is represented as
    λ(β) = max{λ ∈ R_+ : λ ≤ f[P]/(|P| − β) for all partitions P of V with |P| > β}
         = max{λ ∈ R_+ : −βλ ≤ f[P] − |P|λ for all partitions P of V}
         = max{λ ∈ R_+ : −βλ ≤ h(λ)},    (2)
where h : R_+ → R is defined by
    h(λ) = min_P { f[P] − |P|λ : P is a partition of V }    (λ ≥ 0).    (3)
The function h does not depend on the parameter β. For λ ≥ 0, we say that a partition P determines h at λ if f[P] − |P|λ = h(λ). Apparently, the minimization problem (3) is difficult to solve for any given λ ≥ 0. This point will be discussed in Section 4 in detail.
Let us examine properties of the function h. For each partition P of V, define a linear function h_P : R_+ → R as h_P(λ) = f[P] − |P|λ. Since h is the minimum of these linear functions, h is a piecewise-linear concave function on R_+. The function h is illustrated in Figure 1 by the thick curve. We have h(0) = f(V) because f[{V}] ≤ f[P] for any partition P of V. Moreover, it is easy to see that the set of singletons {{1}, {2}, ..., {n}} determines h at a sufficiently large λ. In view of (2), the minimum average cost λ(β) can be obtained by solving the equation −βλ = h(λ) (see also Figure 1). In addition, a β-MAC clustering can be characterized as follows.
Lemma 2. Given a parameter β ∈ [0, n), let P be a partition of V such that |P| > β and h(λ(β)) = f[P] − |P|λ(β). Then P is a β-MAC clustering.
Proof. Since −βλ(β) = h(λ(β)) = f[P] − |P|λ(β), we have λ(β) = f[P]/(|P| − β). For any partition Q of V with |Q| > β, we have −βλ(β) ≤ f[Q] − |Q|λ(β), and thus λ(β) ≤ f[Q]/(|Q| − β). Therefore, P is a β-MAC clustering.
[Figures 1 and 2 sketch the function h. Figure 1 shows the piecewise-linear concave curve h(λ) together with a line h_P(λ) = f[P] − |P|λ, the value h(0) = f(V), and the point λ(β) at which −βλ meets h(λ). Figure 2(a) marks the partitions P_{s_1}, ..., P_{s_5} determining h on successive intervals of λ; Figure 2(b) marks the subintervals I_1, ..., I_4 and B_1, B_2, B_3 defined in the text below.]
Figure 1: The function h.    Figure 2: The structure of h.
Now, we will present a structure property of the MAC problem (1). Suppose that the slopes of h take values −s_1 > −s_2 > · · · > −s_d. As {s_1, s_2, ..., s_d} ⊆ {1, ..., n}, we have d ≤ n. The interval R_+ is split into d subintervals R_1 = [0, λ_1), R_2 = [λ_1, λ_2), ..., R_d = [λ_{d−1}, +∞) such that, for each j = 1, ..., d, the function h is linear and its slope is −s_j on R_j. Let P_{s_1}, P_{s_2}, ..., P_{s_d} be partitions of V such that, for each j = 1, ..., d, the partition P_{s_j} determines h at all λ ∈ R_j (see Figure 2(a)). In particular, s_d = n and the last partition P_{s_d} is the set of singletons {{1}, {2}, ..., {n}}. Observe that the range I of the minimum average costs λ(β) is I = [λ(0), +∞). Suppose that j* is an index such that λ(0) ∈ R_{j*}. Let d′ = d − j* + 1, and let λ̂_j = λ_{j+j*−1} and ŝ_j = s_{j+j*−1} for each j = 1, ..., d′. The interval I is split into d′ subintervals I_1 = [λ(0), λ̂_1), I_2 = [λ̂_1, λ̂_2), ..., I_{d′} = [λ̂_{d′−1}, +∞). Accordingly, the domain of β is split into d′ subintervals B_1 = [0, β_1), B_2 = [β_1, β_2), ..., B_{d′} = [β_{d′−1}, n), where β_j = −h(λ̂_j)/λ̂_j for each j = 1, ..., d′ − 1. Figure 2(b) illustrates these two sets of subintervals {I_1, ..., I_{d′}} and {B_1, ..., B_{d′}}. By Lemma 2, we directly obtain the structure property of the MAC problem (1):
Lemma 3. Let j ∈ {1, ..., d′}. For any β ∈ B_j, the partition P_{ŝ_j} is a β-MAC clustering.
Lemma 3 implies that if we can find the collection {P_{s_1}, P_{s_2}, ..., P_{s_d}}, then the MAC problem (1) will be completely solved. In the subsequent sections, we will give an algorithm that computes the collection {P_{s_1}, P_{s_2}, ..., P_{s_d}} in polynomial time in total.
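In code, once the breakpoints β_1 < · · · < β_{d′−1} of the parameter domain and the partitions attached to the intervals B_j have been stored (one possible layout, not prescribed by the paper), a β-MAC query reduces to an interval lookup:

```python
import bisect

def mac_from_breakpoints(beta, interval_starts, partitions_by_interval):
    """interval_starts = [0, beta_1, ..., beta_{d'-1}] delimits B_1, ..., B_{d'};
    partitions_by_interval[j] stores the partition attached to the (j+1)-th interval."""
    j = bisect.bisect_right(interval_starts, beta) - 1   # index of B_j containing beta
    return partitions_by_interval[j]
```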
3 The clustering algorithm
In this section, we present a framework of a polynomial time algorithm that finds the collection {P_{s_1}, P_{s_2}, ..., P_{s_d}} defined in §2.2. That is, our algorithm computes all the breakpoints of the piecewise-linear concave function h defined in (3). By Lemma 3, we can immediately construct a polynomial time algorithm that solves the MAC problem (1) completely.
The proposed algorithm uses the following procedure FindPartition, which will be described precisely in Section 4.
Procedure FindPartition(λ): For any given λ ≥ 0, this procedure computes the value h(λ) and finds a partition P of V that determines h at λ.
We will use SFM(n) to denote the time required to minimize a general submodular function defined on 2^V, where n = |V|. Submodular function minimization can be solved in polynomial time (see [6]). Although the minimization problem (3) is apparently hard, we show that the procedure FindPartition can be designed to run in polynomial time.
Lemma 4. For any λ ≥ 0, the procedure FindPartition(λ) runs in O(n · SFM(n)) time.
The proof of Lemma 4, which will be given in §4, utilizes the Dilworth truncation of an intersecting submodular function [4, 14].
Let us call a partition P of V supporting if there exists λ ≥ 0 such that h(λ) = h_P(λ). By definition, each P_{s_j} is supporting. In addition, for any λ ≥ 0, FindPartition(λ) returns a supporting partition of V. Set Q_1 := {V} and Q_n := {{1}, {2}, ..., {n}}. Q_1 is a supporting partition of V because h(0) = f[{V}] = h_{Q_1}(0), and Q_n is also supporting because Q_n = P_{s_d}. For a supporting partition P of V, if |P| = s_j for some j ∈ {1, ..., d}, then we can put P_{s_j} = P. For integers 1 ≤ k < ℓ ≤ n, define R(k, ℓ) = {λ ∈ R_+ : −k ≥ ∂_+h(λ) and ∂_−h(λ) ≥ −ℓ}, where ∂_+h and ∂_−h are the right and left derivatives of h, respectively, and we set ∂_−h(0) = 0. Observe that R(k, ℓ) is an interval in R_+. All breakpoints of h are included in R(1, n) = R_+.
Suppose that we are given two supporting partitions Q_k and Q_ℓ such that |Q_k| = k, |Q_ℓ| = ℓ and k < ℓ. We describe the algorithm SPLIT(Q_k, Q_ℓ), which computes the information about all breakpoints of h on the interval R(k, ℓ). This algorithm is a recursive one. First of all, the algorithm SPLIT decides whether "k = s_j and ℓ = s_{j+1} for some j ∈ {1, ..., d − 1}" or not. If the decision is negative, the algorithm finds a supporting partition Q_m such that |Q_m| = m and k < m < ℓ. If the decision is positive, there is exactly one breakpoint in the interior of R(k, ℓ), which can be given by Q_k and Q_ℓ. Now we show how to execute these operations. For the two linear functions h_{Q_k}(λ) and h_{Q_ℓ}(λ), the equality h_{Q_k}(λ) = h_{Q_ℓ}(λ) holds at λ̄ = (f[Q_ℓ] − f[Q_k])/(ℓ − k). Set h̄ = h_{Q_k}(λ̄) = (ℓf[Q_k] − kf[Q_ℓ])/(ℓ − k). Clearly, we have h(λ̄) ≤ h̄. The algorithm SPLIT performs the procedure FindPartition(λ̄). Consider the case where h(λ̄) = h̄ (see Figure 3(a)). Then the algorithm gives an affirmative answer, returns Q_k and Q_ℓ, and stops. Next, consider the case where h(λ̄) < h̄ (see Figure 3(b)). Then the algorithm gives a negative answer, and the partition P returned by FindPartition is supporting and satisfies k < |P| < ℓ. We set m = |P| and Q_m = P. Finally, the algorithm performs SPLIT(Q_k, Q_m) and SPLIT(Q_m, Q_ℓ).
[Figure 3 sketches the two cases: the lines h_{Q_k}(λ) and h_{Q_ℓ}(λ) intersect at the point (λ̄, h̄); in situation (a) the curve h passes through this point, while in situation (b) h passes strictly below it.]
Figure 3: Two different situations in SPLIT(Q_k, Q_ℓ)
The algorithm SPLIT can be summarized as follows.

Algorithm SPLIT(Q_k, Q_ℓ)
Input: Supporting partitions Q_k and Q_ℓ of V such that |Q_k| = k, |Q_ℓ| = ℓ and k < ℓ.
Output: The information about all breakpoints of h on the interval R(k, ℓ).
1: Set λ̄ := (f[Q_ℓ] − f[Q_k])/(ℓ − k), and set h̄ := (ℓf[Q_k] − kf[Q_ℓ])/(ℓ − k). By performing FindPartition(λ̄), compute h(λ̄) and a partition P of V that determines h at λ̄.
2: If h(λ̄) = h̄ (positive case), return Q_k and Q_ℓ, and stop.
3: If h(λ̄) < h̄ (negative case), set m := |P|, Q_m := P, and perform SPLIT(Q_k, Q_m) and SPLIT(Q_m, Q_ℓ).
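A direct transcription of SPLIT in code, assuming an oracle find_partition(lam) that returns the pair (h(lam), P) of Section 4 and a cost oracle f_cost(P) = f[P]; the floating-point tolerance is an implementation choice not discussed in the paper:

```python
def split(Qk, Ql, f_cost, find_partition, breakpoints, tol=1e-9):
    """Recursive SPLIT over supporting partitions with |Qk| = k < l = |Ql|.
    Appends (lam, Qk, Ql) for every breakpoint of h found on R(k, l)."""
    k, l = len(Qk), len(Ql)
    lam = (f_cost(Ql) - f_cost(Qk)) / (l - k)
    h_bar = (l * f_cost(Qk) - k * f_cost(Ql)) / (l - k)
    h_lam, P = find_partition(lam)
    if h_lam >= h_bar - tol:     # positive case: exactly one breakpoint at lam
        breakpoints.append((lam, Qk, Ql))
    else:                         # negative case: k < |P| < l, recurse on both sides
        split(Qk, P, f_cost, find_partition, breakpoints, tol)
        split(P, Ql, f_cost, find_partition, breakpoints, tol)

# Usage: bps = []; split([sorted(V)], [[i] for i in V], f_cost, find_partition, bps)
```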
By performing the algorithm SPLIT(Q_1, Q_n), where Q_1 := {V} and Q_n := {{1}, {2}, ..., {n}}, the information of all breakpoints of h is obtained. Therefore, the collection {P_{s_1}, P_{s_2}, ..., P_{s_d}} defined in §2.2 can be obtained. Let us show that this algorithm runs in polynomial time.
Theorem 5. The collection {P_{s_1}, P_{s_2}, ..., P_{s_d}} can be computed in O(n² · SFM(n)) time. In other words, the information of all breakpoints of h can be computed in O(n² · SFM(n)) time.
Proof. By Lemma 4, it suffices to show that the number of calls of the procedure FindPartition in the execution of SPLIT(Q_1, Q_n) is O(n). In the algorithm, after one call of FindPartition, (i) we can obtain the information about one breakpoint of h, or (ii) a new supporting partition Q_m can be obtained. Clearly, the number of breakpoints of h is at most n. Throughout the execution of SPLIT(Q_1, Q_n), the algorithm computes a supporting k-partition at most once for each k ∈ {1, ..., n}. Therefore, FindPartition is called at most 2n times in total.
The main theorem of this paper directly follows from Lemma 3 and Theorem 5.
Theorem 6. All information of optimal solutions to the minimum average cost clustering problem (1) for all parameters β ∈ [0, n) can be computed in O(n² · SFM(n)) time in total.
4 Finding a partition
In the clustering algorithm of Section 3, we iteratively call the procedure FindPartition, which computes h(λ) defined in (3) and a partition P that determines h(λ) for any given λ ≥ 0. In this section, we will see that the procedure FindPartition can be implemented to run in polynomial time with the aid of the Dilworth truncation of an intersecting submodular function [2], and we give a proof of Lemma 4. The Dilworth truncation algorithm is sketched in the proof of Theorem 48.4 of Schrijver [14], and the algorithm described in §4.2 is based on that algorithm.
4.1 The Dilworth truncation of an intersecting submodular function
We start with definitions of an intersecting submodular function and the Dilworth truncation. Subsets S, T ⊆ V are intersecting if S ∩ T ≠ ∅, S \ T ≠ ∅, and T \ S ≠ ∅. A set function g : 2^V → R is intersecting submodular if g(S) + g(T) ≥ g(S ∪ T) + g(S ∩ T) for all intersecting subsets S, T ⊆ V. Clearly, the fully submodular function f is also intersecting submodular.¹ For any λ ≥ 0, define f_λ : 2^V → R as follows: f_λ(S) = 0 if S = ∅, and f_λ(S) = f(S) − λ otherwise. It is easy to see that f_λ is an intersecting submodular function.
¹ To emphasize the difference between submodular and intersecting submodular functions, in what follows we refer to a submodular function as a fully submodular function.
For a fully submodular function f with f(∅) = 0, consider the polyhedron P(f) = {x ∈ R^n : x(S) ≤ f(S), ∅ ≠ S ⊆ V}, where x(S) = Σ_{i∈S} x_i. The polyhedron P(f) is called a submodular polyhedron. In the same manner, for an intersecting submodular function g with g(∅) = 0, define P(g) = {x ∈ R^n : x(S) ≤ g(S), ∅ ≠ S ⊆ V}. As for P(f), for each nonempty subset S ⊆ V, there exists a vector x ∈ P(f) such that x(S) = f(S), by the validity of the greedy algorithm of Edmonds [3]. On the other hand, the polyhedron P(g) does not necessarily satisfy such a property. Alternatively, the following property is known.
Theorem 7 (Refer to Theorems 2.5, 2.6 of [4]). Given an intersecting submodular function g : 2^V → R with g(∅) = 0, there exists a fully submodular function ĝ : 2^V → R such that ĝ(∅) = 0 and P(ĝ) = P(g). Furthermore, the function ĝ can be represented as
    ĝ(S) = min{ Σ_{T∈P} g(T) : P is a partition of S }.    (4)
The function ĝ in Theorem 7 is called the Dilworth truncation of g. If g is fully submodular, then for each S ⊆ V, {S} is an optimal solution to the RHS of (4) and we have ĝ(S) = g(S). For a general intersecting submodular function g, however, the computation of ĝ(S) is a nontrivial task.
Let us see a small example. Suppose that a fully submodular function f : 2^{{1,2}} → R satisfies f(∅) = 0, f({1}) = 12, f({2}) = 8, and f({1, 2}) = 19. Set λ = 2. There is no vector x ∈ P(f_λ) such that x({1, 2}) = f_λ({1, 2}). The Dilworth truncation f̂_λ : 2^V → R defined by (4) satisfies f̂_λ(S) = f_λ(S) for S ∈ {∅, {1}, {2}}, and f̂_λ({1, 2}) = f_λ({1}) + f_λ({2}) = 16. Observe that f̂_λ is fully submodular and P(f̂_λ) = P(f_λ). Figure 4 illustrates these polyhedra.
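The example can be checked mechanically; under the stated values, the snippet below compares the only two partitions of {1, 2}:

```python
# f({1}) = 12, f({2}) = 8, f({1,2}) = 19, and lam = 2.
f = {frozenset(): 0.0, frozenset({1}): 12.0, frozenset({2}): 8.0,
     frozenset({1, 2}): 19.0}
lam = 2.0
f_lam = lambda S: 0.0 if not S else f[S] - lam

# The partitions of {1, 2} are {{1, 2}} and {{1}, {2}}:
whole = f_lam(frozenset({1, 2}))                          # 17.0
split_up = f_lam(frozenset({1})) + f_lam(frozenset({2}))  # 10.0 + 6.0
hat_f_lam = min(whole, split_up)                          # 16.0, as in the text
```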
[Figure 4 sketches the three polyhedra of the example in the (x1, x2)-plane: P(f) with x1 ≤ 12, x2 ≤ 8, x1 + x2 ≤ 19; P(f_λ) with x1 ≤ 10, x2 ≤ 6, x1 + x2 ≤ 17; and P(f̂_λ) with x1 ≤ 10, x2 ≤ 6, x1 + x2 ≤ 16.]
Figure 4: Polyhedra P(f), P(f_λ), and P(f̂_λ)
[Figure 5 sketches two iterations of the greedy algorithm in the (x1, x2)-plane: starting from x^0, the point moves by α_1 along e_1 and then by α_2 along e_2 toward the boundary of P(f̂_λ).]
Figure 5: The greedy algorithm [3]
4.2 Algorithm that finds a partition
Let us fix λ ≥ 0, and describe FindPartition(λ). In view of equations (3), (4) and the definition of f̂_λ, we obtain h(λ) = f̂_λ(V) using the Dilworth truncation of f_λ. We ask for a partition P of V satisfying f̂_λ(V) = f_λ[P] (= Σ_{T∈P} f_λ(T)), because such a partition P of V determines h at λ.
We know that f̂_λ : 2^V → R is submodular, but f̂_λ(S) = min{f_λ[P] : P is a partition of S} cannot be obtained directly for each S ⊆ V. To evaluate f̂_λ(V), we will use the greedy algorithm of Edmonds [3]. Denote the set of all extreme points of P(f̂_λ) ⊆ R^n by ex(P(f̂_λ)). In the example of §4.1, we have ex(P(f̂_λ)) = {(10, 6)}. We set x^0 ∈ R^n in such a way that x^0 ≤ y for all y ∈ ex(P(f̂_λ)). For example, set x^0_i := −M for each i ∈ V, where M = λ + Σ_{j∈V} (|f({j})| + |f(V) − f(V − {j})|). For each i ∈ V, let e_i denote the i-th unit vector in R^n.
Let L = (i_1, ..., i_n) be any ordering of V, and let V^ℓ = {i_1, ..., i_ℓ} for each ℓ = 1, ..., n. Now we describe the framework of the greedy algorithm [3]. In the ℓ-th iteration (ℓ = 1, ..., n), we compute α_ℓ := max{α : x^{ℓ−1} + α·e_{i_ℓ} ∈ P(f̂_λ)} and set x^ℓ := x^{ℓ−1} + α_ℓ·e_{i_ℓ}. Finally, the algorithm returns z := x^n. Figure 5 illustrates this process. By the following property, we can use the greedy algorithm to evaluate the value h(λ) = f̂_λ(V).
Theorem 8 ([3]). For each ℓ = 1, ..., n, we have f̂_λ(V^ℓ) = x^ℓ(V^ℓ) = z(V^ℓ).
Let us see that the greedy algorithm with f̂_λ can be implemented to run in polynomial time. We discuss how to compute α_ℓ in each iteration. Since x^{ℓ−1} ∈ P(f̂_λ) and P(f̂_λ) = P(f_λ), we have
    α_ℓ = max{α : x^{ℓ−1} + α·e_{i_ℓ} ∈ P(f_λ)} = max{α : x^{ℓ−1}(S) + α ≤ f_λ(S), i_ℓ ∈ S ⊆ V}
        = min{f(S) − x^{ℓ−1}(S) − λ : i_ℓ ∈ S ⊆ V}
        = min{f(S) − x^{ℓ−1}(S) − λ : i_ℓ ∈ S ⊆ V^ℓ},    (5)
where the last equality holds because of the choice of the initial vector x^0 (remark that x^{ℓ−1}_i = x^0_i for all i ∈ V − V^ℓ). Hence, the value α_ℓ can be computed by minimizing a fully submodular function. It follows from Theorem 8 that the value h(λ) = f̂_λ(V) can be computed in O(n · SFM(n)) time. In addition to the value h(λ), a partition P of V such that f[P] − λ|P| = h(λ) is also required. For this purpose, we modify the above greedy algorithm, and obtain the procedure FindPartition.
Procedure FindPartition(λ)
Input: A nonnegative real value λ ≥ 0.
Output: A real value h* and a partition P* of V.
1: Set P^0 := ∅.
2: For each ℓ = 1, ..., n, do:
   Compute α_ℓ = min{f(S) − x^{ℓ−1}(S) − λ : i_ℓ ∈ S ⊆ V^ℓ};
   Find a subset T^ℓ such that i_ℓ ∈ T^ℓ ⊆ V^ℓ and f(T^ℓ) − x^{ℓ−1}(T^ℓ) − λ = α_ℓ;
   Set x^ℓ := x^{ℓ−1} + α_ℓ·e_{i_ℓ}, set U^ℓ := T^ℓ ∪ [∪{S : S ∈ P^{ℓ−1}, T^ℓ ∩ S ≠ ∅}], and set
   P^ℓ := {U^ℓ} ∪ {S : S ∈ P^{ℓ−1}, T^ℓ ∩ S = ∅}.
3: Return h* := z(V) and P* := P^n.
Basically, this procedure FindPartition(λ) is the same algorithm as the above greedy algorithm. But now, we compute P^ℓ in each iteration. Figure 6 shows the computation of P^ℓ in the ℓ-th iteration of the procedure FindPartition(λ). For each ℓ = 1, ..., n, P^ℓ is a partition of V^ℓ = {i_1, ..., i_ℓ}. Thus, P* is a partition of V.
[Figure 6 sketches one iteration: the new cluster U^ℓ is formed by merging T^ℓ (which contains i_ℓ) with every cluster of P^{ℓ−1} that T^ℓ intersects.]
Figure 6: Computation of P^ℓ
Let x be a vector in P(f_λ). We say that a subset S ⊆ V is x-tight (with respect to f_λ) if f_λ(S) = x(S). By the intersecting submodularity of f_λ, if S and T are intersecting and both S and T are x-tight, then S ∪ T is also x-tight. Using this property, we obtain the following property.
Lemma 9. For each ℓ = 1, ..., n, we have f̂_λ(V^ℓ) = x^ℓ(V^ℓ) = f_λ[P^ℓ].
Proof. (Sketch) For each ℓ = 1, ..., n, observe that T^ℓ is x^ℓ-tight. Thus, we can show by induction that any cluster in P^ℓ is x^ℓ-tight for each ℓ = 1, ..., n. Thus, f_λ[P^ℓ] = Σ_{S∈P^ℓ} f_λ(S) = Σ_{S∈P^ℓ} x^ℓ(S) = x^ℓ(V^ℓ). Moreover, the equality f̂_λ(V^ℓ) = x^ℓ(V^ℓ) follows from Theorem 8.
The procedure FindPartition(λ) returns h* ∈ R and P*. By Theorem 8, we have h* = h(λ), and by Lemma 9, we have f̂_λ(V) = f_λ[P*], and thus the partition P* of V determines h(λ). Clearly, the procedure runs in O(n · SFM(n)) time. So, in the end, we completed the proof of Lemma 4.
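The sketch below mirrors FindPartition(λ). For readability the inner minimization (5) is solved by enumerating subsets, which is exponential in ℓ; a faithful polynomial-time version would replace that enumeration with a call to a submodular function minimizer, as Lemma 4 assumes. Variable names are ours.

```python
import itertools

def find_partition(V, f, lam):
    """Greedy Dilworth-truncation procedure; returns (h(lam), partition of V).
    f maps a frozenset S to f(S); the subset enumeration stands in for SFM."""
    order = list(V)
    M = lam + sum(abs(f(frozenset({j}))) +
                  abs(f(frozenset(V)) - f(frozenset(V) - {j})) for j in V)
    x = {i: -M for i in V}                  # x^0, below every extreme point
    P = []                                  # current partition of V^l
    for l, i in enumerate(order, start=1):
        others = [j for j in order[:l] if j != i]
        alpha, Tl = float("inf"), None
        for r in range(len(others) + 1):    # alpha_l via (5), by enumeration
            for extra in itertools.combinations(others, r):
                S = frozenset(extra) | {i}
                val = f(S) - sum(x[j] for j in S) - lam
                if val < alpha:
                    alpha, Tl = val, S
        x[i] += alpha                       # x^l := x^{l-1} + alpha_l e_{i_l}
        merged, keep = set(Tl), []
        for C in P:                         # U^l absorbs clusters meeting T^l
            (merged.update(C) if set(C) & Tl else keep.append(C))
        P = keep + [sorted(merged)]
    return sum(x[i] for i in V), P          # h(lam) = z(V), and P* = P^n
```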
5 Experimental results
5.1 Illustrative example
We first illustrate the proposed algorithm using two artificial datasets depicted in Figure 7. The upper dataset is generated from four Gaussians with unit variance (whose centers are located at (3,3), (3,−3), (−3,3) and (−3,−3), respectively), and the lower one consists of three circles with different radii together with a line. The numbers of samples in these examples are 100 and 310, respectively. Figure 7 shows typical examples of the partitions calculated through Algorithm SPLIT given in Section 3. Here the function f is a cut function of a complete graph, and the weight of each edge of that graph is determined by the Gaussian similarity function [15]. The values of λ above the panels of Figure 7 are the ones identified as breakpoints. Note that several partitions other than those shown in the figure were obtained through one execution of Algorithm SPLIT. As can be seen, the algorithm produced clusters at several different granularities, with inclusion relations between them.
[Figure 7 shows six scatter plots of the computed partitions, labeled by the breakpoint values λ = 0.19, 0.54, 5.21 (Gaussians) and λ = 0.87, 3.22, 4.90 (circles).]
Figure 7: Illustrative examples with datasets from four Gaussians (above) and three circles (below).
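For completeness, the cut function used in these experiments can be assembled as follows; the bandwidth sigma of the Gaussian similarity is a free choice here, since the paper does not report the value used:

```python
import numpy as np

def gaussian_cut_function(X, sigma=1.0):
    """Cut function of the complete graph on the rows of X, with edge
    weights w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)). f(S) is the total
    weight of edges between S and its complement (symmetric, submodular)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    n = X.shape[0]
    def f(S):
        mask = np.zeros(n, dtype=bool)
        mask[list(S)] = True
        return W[mask][:, ~mask].sum()
    return f
```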
5.2 Empirical comparison
Next, in this subsection, we empirically compare the performance of the algorithm with existing algorithms on several synthetic and real-world datasets from the UCI repository. The compared algorithms are the k-means method, the spectral clustering method with normalized cut [11], and maximum margin clustering [16]; we used cut functions as the objective functions for the MAC clustering algorithm. The three UCI datasets used in this experiment are "Glass", "Iris" and "Libras", which consist of 214, 150 and 360 samples, respectively. For the existing algorithms, the number of clusters was selected through 5-fold cross-validation (again, note that our algorithm needs no such hyper-parameter tuning). Table 1 shows the clustering accuracy when applying the algorithms to the two artificial datasets (described in Subsection 5.1) and the three UCI datasets. For our algorithm, the results with the best performance among the several computed partitions are shown. As can be seen, our algorithm seems to be competitive with the existing leading algorithms for these datasets.
Method             Gaussian   Circle   Iris   Libras   Glass
k-means              1.0       0.88    0.79    0.85    0.93
normalized cut       0.88      0.86    0.84    0.87    0.93
maximum margin       0.99      1.0     0.96    0.90    0.97
minimum average      0.99      1.0     0.99    0.97    0.97

Table 1: Clustering accuracy for the proposed and existing algorithms.
6 Concluding remarks
We have introduced a new concept, the minimum average cost clustering problem. We have shown that the set of minimum average cost clusterings has a compact representation, and, when the clustering cost is given by a submodular function, we have proposed a polynomial time algorithm that computes all information about minimum average cost clusterings. This result contrasts sharply with the NP-hardness of the optimal k-clustering problem [5]. The present paper reinforced the importance of the theory of intersecting submodular functions from the viewpoint of clustering.
Acknowledgments
This work is supported in part by the JSPS Global COE program "Computationism as a Foundation for the Sciences", KAKENHI (20310088, 22700007, and 22700147), and the JST PRESTO program. We would also like to thank Takuro Fukunaga for his helpful comments.
References
[1] W. H. Cunningham: Optimal attack and reinforcement of a network. Journal of the ACM 32
(1985), pp. 549?561.
[2] R. P. Dilworth: Dependence relations in a semimodular lattice. Duke Mathematical Journal,
11 (1944), pp. 575?587.
[3] J. Edmonds: Submodular functions, matroids, and certain polyhedra. Combinatorial Structures
and Their Applications, R. Guy, H. Hanani, N. Sauer, and J. Schönheim, eds., Gordon and
Breach, 1970, pp. 69?87.
[4] S. Fujishige: Submodular Functions and Optimization (Second Edition). Elsevier, Amsterdam,
2005.
[5] O. Goldschmidt and D. S. Hochbaum: A polynomial algorithm for the k-cut problem for fixed
k, Mathematics of Operations Research, 19 (1994), pp. 24?37.
[6] S. Iwata: Submodular function minimization. Mathematical Programming, 112 (2008), pp.
45?64.
[7] V. Kolmogorov: A faster algorithm for computing the principal sequence of partitions of a
graph. Algorithmica 56, pp. 394-412.
[8] Y. Kawahara, K. Nagano, and Y. Okamoto: Submodular fractional programming for balanced
clustering. Pattern Recognition Letters, to appear.
[9] M. Narasimhan and J. Bilmes: Local search for balanced submodular clusterings. In Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI 2007), pp.
981?986.
[10] M. Narasimhan, N. Jojic, and J. Bilmes: Q-clustering. In Advances in Neural Information
Processing Systems, 18 (2006), pp. 979?986. Cambridge, MA: MIT Press.
[11] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm.
Advances in neural information processing systems, 2:849?856, 2002.
[12] K. Okumoto, T. Fukunaga, and H. Nagamochi: Divide-and-conquer algorithms for partitioning
hypergraphs and submodular systems. In Proceedings of the 20th International Symposium on
Algorithms and Computation (ISAAC 2009), LNCS 5878, 2009, pp. 55?64.
[13] M. Queyranne: Minimizing symmetric submodular functions, Mathematical Programming, 82
(1998), pp. 3?12.
[14] A. Schrijver: Combinatorial Optimization ? Polyhedra and Efficiency. Springer-Verlag, 2003.
[15] U. von Luxburg: Tutorial on spectral clustering. Statistics and Computing 17 (2007), pp. 395?
416.
[16] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. Advances in
neural information processing systems, 17:1537?1544, 2005.
[17] L. Zhao, H. Nagamochi, and T. Ibaraki: Approximating the minimum k-way cut in a graph via
minimum 3-way cuts. Journal of Combinatorial Optimization, 5 (2001), pp. 397?410.
[18] L. Zhao, H. Nagamochi, and T. Ibaraki: A unified framework for approximating multiway
partition problems. In Proceedings of the 12th International Symposium on Algorithms and
Computation (ISAAC 2001), LNCS 2223, 2001, pp. 682?694.
Implicit Differentiation by Perturbation
Justin Domke
Rochester Institute of Technology
[email protected]
Abstract
This paper proposes a simple and efficient finite difference method for implicit differentiation of marginal inference results in discrete graphical models. Given an arbitrary loss function, defined on marginals, we show that
the derivatives of this loss with respect to model parameters can be obtained
by running the inference procedure twice, on slightly perturbed model parameters. This method can be used with approximate inference, with a
loss function over approximate marginals. Convenient choices of loss functions make it practical to fit graphical models with hidden variables, high
treewidth and/or model misspecification.
1 Introduction
As graphical models are applied to more complex problems, it is increasingly necessary to
learn parameters from data. Though the likelihood and conditional likelihood are the most
widespread training objectives, these are sometimes undesirable and/or infeasible in real
applications.
With low treewidth, if the data is truly distributed according to the chosen graphical model
with some parameters, any consistent loss function will recover those true parameters in the
high-data limit, and so one might select a loss function according to statistical convergence
rates [1]. In practice, the model is usually misspecified to some degree, meaning no "true"
parameters exist. In this case, different loss functions lead to different asymptotic parameter
estimates. Hence, it is useful to consider the priorities of the user when learning. For lowtreewidth graphs, several loss functions have been proposed that prioritize different types
of accuracy (section 2.2). For parameters θ, these loss functions are given as a function L(μ(θ)) of marginals μ(θ). One can directly calculate ∂L/∂μ. The parameter gradient dL/dθ can be efficiently computed by loss-specific message-passing schemes [2, 3].
The likelihood may also be infeasible to optimize, due to the computational intractability
of computing the log-partition function or its derivatives in high treewidth graphs. On the
other hand, if an approximate inference algorithm will be used at test time, it is logical to
design the loss function to compensate for defects in inference. The surrogate likelihood
(the likelihood with an approximate partition function) can give superior results to the true
likelihood, when approximate inference is used at test time[4].
The goal of this paper is to efficiently fit parameters to optimize an arbitrary function of predicted marginals, in a high-treewidth setting. If μ(θ) is the function mapping parameters to (approximate) marginals, and there is some loss function L(μ) defined on those marginals, we desire to recover dL/dθ. This enables the use of the marginal-based loss functions mentioned previously, but defined on approximate marginals.
There are two major existing approaches for calculating dL/dθ. First, after performing inference, this gradient can be obtained by solving a large, sparse linear system [5]. The major disadvantage of this approach is that standard linear solvers can perform poorly on large
[Figure 1 panels: True (y), Noisy (x), Surrogate likelihood, Clique likelihood, Univariate likelihood, Smooth class. error.]
Figure 1: Example images from the Berkeley dataset, along with marginals for a conditional random field fit with various loss functions.
graphs, meaning that calculating this gradient can be more expensive than performing inference (Section 4). A second option is the Back Belief Propagation (BBP) algorithm[6]. This
is based on application of reverse-mode automatic differentiation (RAD) to message passing.
Crucially, this can be done without storing all intermediate messages, avoiding the enormous
memory requirements of a naive application of RAD. This is efficient, with running-time in
practice similar to inference. However, it is tied to a specific entropy approximation (Bethe)
and algorithm (Loopy Belief Propagation). Extension to similar message-passing algorithms
appears possible, but extension to more complex inference algorithms [7, 8, 9] is unclear.
Here, we observe that the loss gradient can be calculated by far more straightforward means. Our basic result is extremely simple:

dL/dθ ≈ (1/r) (μ(θ + r ∂L/∂μ) − μ(θ)),

with equality in the limit r → 0. This result follows from, first, the well-known trick of approximating Jacobian-vector products by finite differences and, second, the special property that for marginal inference, the Jacobian matrix dμ/dθᵀ is symmetric. This result applies when marginal inference takes place over the local polytope with an entropy that is concave and obeys a minor technical condition. It can also be used with non-concave entropies, with some assumptions on how inference recovers different local optima. It is easy to use this to compute the gradient of essentially any differentiable loss function defined on marginals. Effectively, all one needs to do is re-run the inference procedure on a set of parameters slightly "perturbed" in the direction ∂L/∂μ. Conditional training and tied or nonlinear parameters can also be accommodated.
One clear advantage of this approach is simplicity and ease of implementation. Aside from this, like the matrix inversion approach, it is independent of the algorithm used to perform inference, and applicable to a variety of different inference approximations. Like BBP, the method is efficient in that it makes only two calls to inference.
2 Background
2.1 Marginal Inference
This section briefly reviews the aspects of graphical models and marginal inference that are required for the rest of the paper. Let x denote a vector of discrete random variables. We use the exponential family representation

p(x; θ) = exp(θ · f(x) − A(θ)),    (1)

where f(x) is the features of the observation x, and A(θ) = log Σ_x exp(θ · f(x)) assures normalization. For graphical models, f is typically a vector of indicator functions for each possible configuration of each factor and variable. With a slight abuse of set notation to represent a vector, this can be written as f(x) = {I[x_α]} ∪ {I[x_i]}. It is convenient to refer to the components of vectors like those in Eq. 1 using function notation. Write θ(x_α) to refer to the component of θ corresponding to the indicator function I[x_α], and similarly for θ(x_i). This gives an alternative representation for p, namely

p(x; θ) = exp(Σ_α θ(x_α) + Σ_i θ(x_i) − A(θ)).    (2)
Marginal inference means recovering the expected value of f or, equivalently, the marginal probability that each factor or variable has a particular value:

μ(θ) = Σ_x p(x; θ) f(x).    (3)
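To make Eqs. 1-3 concrete, here is a minimal brute-force sketch (our own illustration, not code from the paper; the tiny chain model and all names are assumptions) that enumerates every configuration of a small discrete model to compute A(θ) and the univariate marginals:

```python
import itertools
import numpy as np

def brute_force_marginals(theta_node, theta_edge, edges, n_states):
    """Enumerate all configurations of a tiny discrete model (Eq. 2)
    and return A(theta) and the univariate marginals (Eq. 3)."""
    n_vars = len(theta_node)
    configs = list(itertools.product(range(n_states), repeat=n_vars))
    # Unnormalized log-probabilities: sums of node and edge parameters.
    log_p = np.array([
        sum(theta_node[i][x[i]] for i in range(n_vars)) +
        sum(theta_edge[e][x[i], x[j]] for e, (i, j) in enumerate(edges))
        for x in configs])
    A = np.logaddexp.reduce(log_p)   # log-partition function
    p = np.exp(log_p - A)            # normalized probabilities
    mu = np.zeros((n_vars, n_states))
    for prob, x in zip(p, configs):
        for i in range(n_vars):
            mu[i, x[i]] += prob
    return A, mu

# Three-variable binary chain with random parameters.
rng = np.random.default_rng(0)
theta_node = rng.normal(size=(3, 2))
edges = [(0, 1), (1, 2)]
theta_edge = rng.normal(size=(2, 2, 2))
A, mu = brute_force_marginals(theta_node, theta_edge, edges, 2)
print(A, mu)
```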
Though marginals could, in principle, be computed by the brute-force sum in Eq. 3, it is useful to consider the paired variational representation [10, Chapter 3]

A(θ) = max_{μ∈M} θ · μ + H(μ)    (4)
μ(θ) = dA/dθ = argmax_{μ∈M} θ · μ + H(μ),    (5)

in which A and μ can both be recovered from solving the same optimization problem. Here, M = {μ(θ) | θ ∈ ℝⁿ} is the marginal polytope: those marginals μ resulting from some parameter vector θ. Similarly, H(μ) is the entropy of p(x; θ(μ)), where θ(μ) is the vector of parameters that produces the marginals μ.
As M is a convex set, and H a concave function, Eq. 5 is equivalent to a convex optimization problem. Nevertheless it is difficult to characterize M or compute H(μ) in high-treewidth graphs. A variety of approximate inference methods can be seen as solving a modification of Eqs. 4 and 5, with the marginal polytope and entropy replaced with tractable approximations. Notice that these are also paired; the approximate μ is the exact gradient of the approximate A.
The commonest relaxation of M is the local polytope

L = {μ ≥ 0 | Σ_{x_{α\i}} μ(x_α) = μ(x_i),  Σ_{x_i} μ(x_i) = 1}.    (6)

This underlies loopy belief propagation, as well as tree-reweighted belief propagation. Since a valid set of marginals must obey these constraints, M ⊆ L. Note that since the equality constraints are linear, there exists a matrix B and vector d such that

L = {μ ≥ 0 | Bμ = d}.    (7)
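As an illustration of Eqs. 6-7 (a hypothetical construction, not from the paper), the constraint matrix B and vector d for a two-variable binary model with a single edge can be written out explicitly:

```python
import numpy as np

# mu ordering: [mu(x0=0), mu(x0=1), mu(x1=0), mu(x1=1),
#               mu(00), mu(01), mu(10), mu(11)]
B = np.zeros((6, 8))
d = np.zeros(6)
# Normalization: sum_{x_i} mu(x_i) = 1 for each variable.
B[0, 0:2] = 1.0; d[0] = 1.0
B[1, 2:4] = 1.0; d[1] = 1.0
# Marginalization: sum_{x1} mu(x0, x1) = mu(x0) for each value of x0 ...
B[2, [4, 5]] = 1.0; B[2, 0] = -1.0
B[3, [6, 7]] = 1.0; B[3, 1] = -1.0
# ... and sum_{x0} mu(x0, x1) = mu(x1) for each value of x1.
B[4, [4, 6]] = 1.0; B[4, 2] = -1.0
B[5, [5, 7]] = 1.0; B[5, 3] = -1.0
# Any mu >= 0 with B mu = d lies in the local polytope L of Eq. 6.
# (This explicit system contains one redundant row in exact arithmetic.)
```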
A variety of entropy approximations exist. The Bethe approximation implicit in loopy belief propagation [11] is non-concave in general, which results in sometimes failing to achieve the global optimum. Concave entropy functions include the tree-reweighted entropy [12], convexified Bethe entropies [13], and the class of entropies obeying Heskes' conditions [14].
2.2 Loss Functions
Given some data, {x̂}, we will pick the parameters θ to minimize the empirical risk

Σ_{x̂} L(x̂; θ).    (8)
Likelihood. The (negative) likelihood is the classic loss function for training graphical models. Exploiting the fact that dA/dθ = μ(θ), the gradient is available in closed form.

L(x̂; θ) = −log p(x̂; θ) = −θ · f(x̂) + A(θ).    (9)
dL/dθ = −f(x̂) + μ(θ).    (10)
Surrogate Likelihood. Neither A nor μ is tractable with high treewidth. However, if written in variational form (Eqs. 4 and 5), they can be approximated using approximate inference. The surrogate likelihood [4] is simply the likelihood as in Eq. 9 with an approximate A. It has the gradient as in Eq. 10, but with approximate marginals μ. Unlike the losses below, the surrogate likelihood is convex when based on a concave inference method. See Ganapathi et al. [15] for a variant of this for inference with local optima.
Univariate Likelihood. If the application will only make use of univariate marginals at test time, one might fit parameters specifically to make these univariate marginals accurate. Kakade et al. [3] proposed the loss

L(x̂; θ) = −Σ_i log μ(x̂_i; θ).    (11)

This can be computed in treelike graphs, after running belief propagation to compute marginals. A message-passing scheme can efficiently compute the gradient.
Univariate Classification Error. Some applications only use the maximum probability marginals. Gross et al. [2] considered the loss

L(x̂; θ) = Σ_i S(max_{x_i ≠ x̂_i} μ(x_i; θ) − μ(x̂_i; θ)),    (12)

where S is the step function. This loss measures the number of incorrect components of x̂ if each is predicted to be the "max marginal". However, since this is non-differentiable, it is suggested to approximate this by replacing S with a sigmoid function S(t) = (1 + exp(−λt))⁻¹, where λ controls the approximation quality. Our experiments use λ = 50. As with the univariate likelihood, this loss can be computed if exact marginals are available. Computing the gradient requires another message passing scheme.
Clique loss functions. One can easily define clique versions of the previous two loss functions, where the summations are over α, rather than i. These measure the accuracy of clique-wise marginals, rather than univariate marginals.
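To illustrate how these marginal-based losses plug into the machinery that follows, here is a sketch (our own, with assumed array conventions: mu is an n x k matrix of univariate marginals and x_hat the true labels) of the univariate likelihood (Eq. 11) and the smoothed classification error (Eq. 12), each returning the partial derivative ∂L/∂μ:

```python
import numpy as np

def univariate_likelihood(mu, x_hat):
    """Eq. 11: L = -sum_i log mu_i(x_hat_i); also returns dL/dmu."""
    n, k = mu.shape
    idx = np.arange(n)
    L = -np.sum(np.log(mu[idx, x_hat]))
    dmu = np.zeros_like(mu)
    dmu[idx, x_hat] = -1.0 / mu[idx, x_hat]
    return L, dmu

def smooth_class_error(mu, x_hat, lam=50.0):
    """Eq. 12 with the sigmoid relaxation S(t) = 1/(1 + exp(-lam*t))."""
    n, k = mu.shape
    idx = np.arange(n)
    true_p = mu[idx, x_hat]
    masked = mu.copy()
    masked[idx, x_hat] = -np.inf       # exclude the true label
    best_other = masked.max(axis=1)
    s = 1.0 / (1.0 + np.exp(-lam * (best_other - true_p)))
    # Subgradient, treating the argmax label as fixed:
    g = lam * s * (1.0 - s)
    dmu = np.zeros_like(mu)
    dmu[idx, masked.argmax(axis=1)] += g
    dmu[idx, x_hat] -= g
    return s.sum(), dmu
```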
2.3 Implicit Differentiation
As noted in Eq. 7, the equality constraints in the local polytope are linear, and hence when the positivity constraint can be disregarded, approximate marginal inference algorithms can be seen as solving the optimization μ(θ) = argmax_{μ: Bμ=d} θ · μ + H(μ). Domke showed [5], in our notation, that

dL/dθ = (D⁻¹Bᵀ(BD⁻¹Bᵀ)⁻¹BD⁻¹ − D⁻¹) dL/dμ,    (13)

where D = ∂²H/∂μ∂μᵀ is the (diagonal) Hessian of the entropy approximation. Unfortunately, this requires solving a sparse linear system for each training example and iteration. As we will see below, with large or poorly conditioned problems, the computational expense of this can far exceed that of inference. Note that BD⁻¹Bᵀ is, in general, indefinite, restricting what solvers can be used. Another limitation is that D can be singular if any counting numbers (Eq. 16) are zero.
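In practice, Eq. 13 need not be evaluated by forming the matrix products explicitly; it is algebraically equivalent to solving one symmetric indefinite KKT system. The sketch below (our own, with an assumed interface; B is a scipy.sparse constraint matrix from Eq. 7) shows one way to do this:

```python
import numpy as np
from scipy.sparse import bmat, diags
from scipy.sparse.linalg import spsolve

def loss_gradient_by_solve(B, D_diag, dL_dmu):
    """Evaluate Eq. 13 without forming (B D^-1 B^T)^-1: with
    K = [[D, B^T], [B, 0]] and K [g; y] = [dL/dmu; 0], the upper block
    of the solution satisfies dL/dtheta = -g.  A sketch, not the
    paper's code."""
    m, n = B.shape
    K = bmat([[diags(D_diag), B.T], [B, None]], format="csc")
    rhs = np.concatenate([np.asarray(dL_dmu), np.zeros(m)])
    sol = spsolve(K, rhs)
    return -sol[:n]
```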
2.4 Conditional training and nonlinear parameters.
For simplicity, all the above discussion was confined to fully parametrized models. Nonlinear and tied parameters are easily dealt with by considering θ(φ) to be a function of the "true" parameters φ.
Algorithm 1 Calculating loss derivatives (two-sided).
1. Do inference: μ* ← argmax_{μ∈M} θ · μ + H(μ)
2. At μ*, calculate the partial derivative ∂L/∂μ.
3. Calculate a perturbation size r.
4. Do inference on perturbed parameters:
   μ⁺ ← argmax_{μ∈M} (θ + r ∂L/∂μ) · μ + H(μ)
   μ⁻ ← argmax_{μ∈M} (θ − r ∂L/∂μ) · μ + H(μ)
5. Recover full derivative: dL/dθ ≈ (1/2r)(μ⁺ − μ⁻)
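Alg. 1 is only a few lines on top of an existing inference routine. The sketch below is our own rendering (the `infer` callable and its interface are assumptions); the step-size rule is the one used in Section 4, following Andrei [17]:

```python
import numpy as np

def loss_gradient_by_perturbation(infer, theta, dL_dmu):
    """Two-sided version of Alg. 1.  `infer(theta)` is any marginal
    inference routine returning a vector of (approximate) marginals."""
    eps = np.finfo(float).eps
    # Perturbation size following Andrei [17], as in Section 4.
    r = np.sqrt(eps) * (1.0 + np.abs(theta).max()) / np.abs(dL_dmu).max()
    mu_plus = infer(theta + r * dL_dmu)
    mu_minus = infer(theta - r * dL_dmu)
    # Step 5 of Alg. 1: dL/dtheta ~= (mu+ - mu-) / (2r).
    return (mu_plus - mu_minus) / (2.0 * r)
```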
Once dL/dθ is known, dL/dφ can be recovered by a simple application of the chain rule, namely

dL/dφ = (dθᵀ/dφ) (dL/dθ).    (14)

Conditional training is similar: define a distribution over a random variable y, parametrized by θ(φ; x); the derivative on a particular pair (x, y) is given again by Eq. 14. Examples of both of these are in the experiments.
3 Implicit Differentiation by Perturbation
This section shows that when μ(θ) = argmax_{μ∈L} θ · μ + H(μ), the loss gradient can be computed by Alg. 1 for a concave entropy approximation of the form

H(μ) = −Σ_α c_α Σ_{x_α} μ(x_α) log μ(x_α) − Σ_i c_i Σ_{x_i} μ(x_i) log μ(x_i),    (15)

when the counting numbers c obey (as is true of most proposed entropies)

c_α > 0,  c_i + Σ_{α: i∈α} c_α > 0.    (16)

For intuition, the following Lemma uses notation (μ, θ, H) suggesting the application to marginal inference. However, note that the result is true for any functions satisfying the stated conditions.
Lemma. If μ(θ) is implicitly defined by

μ(θ) = argmax_μ θ · μ + H(μ)    (17)
s.t. Bμ − d = 0,    (18)

where H(μ) is strictly convex and twice differentiable, then dμ/dθᵀ exists and is symmetric.

Proof. First, form a Lagrangian enforcing the constraints on the objective function.

L = θ · μ + H(μ) + λᵀ(Bμ − d)    (19)

The solution is μ and λ such that dL/dμ = 0 and dL/dλ = 0:

[θ + ∂H(μ)/∂μ + Bᵀλ ; Bμ − d] = [0 ; 0].    (20)

Recall the general implicit function theorem. If f(θ) is implicitly defined by the constraint that h(θ, f) = 0, then

df/dθᵀ = −(∂h/∂fᵀ)⁻¹ (∂h/∂θᵀ).    (21)

Using Eq. 20 as our definition of h, and differentiating with respect to both μ and λ, we have

[dμ/dθᵀ ; dλ/dθᵀ] = −[∂²H/∂μ∂μᵀ, Bᵀ ; B, 0]⁻¹ [I ; 0].    (22)

We see that −dμ/dθᵀ is the upper left block of the inverted matrix. The result follows, since the inverse of a symmetric matrix is symmetric.
The following is the main result driving this paper. Again, this uses notation suggesting
the application to implicit differentiation and marginal inference, but holds true for any
functions satisfying the stated conditions.
Theorem. Let μ(θ) be defined as in the previous Lemma, and let L(θ) be defined by L(θ) = M(μ(θ)) for some differentiable function M(μ). Then the derivative of L with respect to θ is given by

dL/dθ = lim_{r→0} (1/r) (μ(θ + r ∂M/∂μ) − μ(θ)).    (23)

Proof. First note that, by the vector chain rule,

dL/dθ = (dμᵀ/dθ) (∂M/∂μ).    (24)

Next, take some vector v. By basic calculus, the derivative of μ(θ) in the direction of v is

(dμ/dθᵀ) v = lim_{r→0} (1/r) (μ(θ + rv) − μ(θ)).    (25)

The result follows from substituting ∂M/∂μ for v, and using the previous lemma to establish that dμ/dθᵀ = dμᵀ/dθ.
Alg. 1 follows from applying this theorem to marginal inference. However, notice that this does not enforce the constraint that μ ≥ 0. The following gives mild technical conditions under which μ will be strictly positive, and so the above theorem applies.

Theorem. If H(μ) = Σ_α c_α H(μ_α) + Σ_i c_i H(μ_i), and μ* is a (possibly local) maximum of θ · μ + H(μ) under the local polytope L, then

c_α > 0,  c_i + Σ_{α: i∈α} c_α > 0  ⟹  μ* > 0.    (26)

This is an extension of a previous result [11, Theorem 9] for the Bethe entropy. However, extremely minor changes to the existing proof give this stronger result. Most proposed entropies satisfy these conditions, including the Bethe entropy (c_α = 1, c_i + Σ_{α: i∈α} c_α = 1), the TRW entropy (c_α = ρ(α), c_i + Σ_{α: i∈α} c_α = 1, where ρ(α) > 0 is the probability that α appears in a randomly chosen tree) and any entropy satisfying the slightly strengthened versions of Heskes' conditions [14, 16, Section 2].

What about non-concave entropies? The only place concavity was used above was in establishing that Eq. 20 has a unique solution. With a non-concave entropy this condition is still valid, but not unique, since there can be local optima. BBP essentially calculates this
[Figure 2: four panels of running time (s) on log scale, against grid size (left column) and interaction strength (right column), for the Bethe entropy (top) and TRW entropy (bottom); legends: pert-BP, symmlq, BBP, direct, BP (top) and pert-TRWS, symmlq, direct, TRWS (bottom).]

Figure 2: Times to compute dL/dθ by perturbation, Back Belief Propagation (BBP), sparse matrix factorization (direct) and the iterative symmetric-LQ method (symmlq). Inference with BP and TRWS are shown for reference. As these results use two-sided differences, perturbation always takes twice the running time of the base inference algorithm. BBP takes time similar to BP. Results use a pairwise grid with x_i ∈ {1, 2, ..., 5}, with univariate terms θ(x_i) taken uniformly from [−1, +1] and interaction strengths θ(x_i, x_j) from [−a, +a] for varying a. Top Left: Bethe entropy for varying grid sizes, with a = 1. Matrix factorization is efficient on small problems, but scales poorly. Top Right: Bethe entropy with a grid size of 32 and varying interaction strengths a. High interaction strengths lead to poor conditioning, slowing iterative methods. Bottom Left: Varying grid sizes with the TRW entropy. Bottom Right: TRW entropy with a grid size of 32 and varying interactions.
derivative by "tracking" the local optima. If perturbed beliefs are calculated from constant initial messages with a small step, one obtains the same result. Thus, BBP and perturbation give the same gradient for the Bethe approximation. (This was also verified experimentally.)

It remains to select the perturbation size r. Though the gradient is exact in the limit r → 0, numerical error eventually dominates. Following Andrei [17], the experiments here use r = √ε (1 + |θ|_∞) / |∂L/∂μ|_∞, where ε is machine epsilon.
4 Experiments
For inference, we used either loopy belief propagation or tree-reweighted belief propagation. As these experiments take place on grids, we are able to make use of the convergent TRWS algorithm [18, Alg. 5], which we found to converge significantly faster than standard TRW. BP/TRWS were iterated until predicted beliefs changed less than 10⁻⁵ between iterations. BBP used a slightly looser convergence threshold of 10⁻⁴, which was similarly accurate. Base code was implemented in Python, with C++ extensions for inference algorithms for efficiency. Sparse systems were solved directly using an interface to Matlab, which calls LAPACK. We selected the Symmetric LQ method as an iterative solver. Both solvers were the fastest among several tested on these problems. (Recall, the system is indefinite.) BBP results were computed by interfacing to the authors' implementation included in the libDAI toolkit [19]. We found the PAR mode, based on parallel updates [6, Eqs. 14-25], to be much slower than the more sophisticated SEQ_FIX mode, based on sequential updates [6, extended
Table 1: Binary denoising results, comparing the surrogate likelihood against three loss functions fit by implicit differentiation. All loss functions are per-pixel, based on tree-reweighted belief propagation with edge inclusion probabilities of .5. The "Best Published" results are the lowest previously reported pixelwise test errors using essentially loopy-belief-propagation-based surrogate likelihood. (For all losses, lower is better.)

                                                        Berkeley Segmentation Data
                       Bimodal        Gaussian       Surrogate      Clique         Univariate     Class.
                       Class. Error   Class. Error   likelihood     likelihood     likelihood     Error
Training Loss          Train  Test    Train  Test    Train  Test    Train  Test    Train  Test    Train  Test
Surrogate likelihood   .0498  .0540   .0286  .0239   .251   .252    1.328  1.330   .417   .416    .141   .140
Clique likelihood      .0488  .0535   .0278  .0236   .275   .277    1.176  1.178   .316   .315    .127   .126
Univariate likelihood  .0493  .0541   .0278  .0235   .301   .303    1.207  1.210   .305   .305    .128   .127
Smooth Class. Error    .0460  .0527   .0273  .0241   .281   .283    1.179  1.181   .311   .310    .127   .126
Best Published [20]           .0548          .0251
version, Fig. 5]. Hence, all results here use the latter. Other modes exceeded the available 12 GB of memory. All experiments use a single core of a 2.26 GHz machine.

Our first experiment makes use of synthetically generated grid models. This allows systematic variation of graph size and parameter strength. With the TRW entropy, we use uniform edge appearance probabilities of ρ = .49, to avoid singularity in D. Our results (Fig. 2) can be summarized as follows. Matrix inversion (Eq. 13) with a direct solver is very efficient on small problems, but scales poorly. The iterative solver is expensive, and extremely sensitive to conditioning. With the Bethe approximation, perturbation performs similarly to BBP. TRWS converges faster than BP on poorly conditioned problems.
The second experiment considers a popular dataset for learning in high-treewidth graphical models [21]. This consists of four base images, each corrupted with 50 random noise patterns (either Gaussian or bimodal). Following the original work, 10 corrupted versions of the first base image are used for training, and the remaining 190 for testing. This dataset has been used repeatedly [22, 23], though direct comparison is sometimes complicated by varying model types and training/test set divisions. This experiment uses a grid model over neighboring pairs (i, j):

p(y | x) = exp(Σ_{i,j} θ(y_i, y_j) + Σ_i θ(y_i; x_i) − A(θ(x))),    (27)

where θ(x) is a function of the input, with θ(y_i, y_j) = a(y_i, y_j) fully parametrized (independent of x) and θ(y_i; x_i) = b(y_i) x_i + c(y_i) an affine function of x_i. Enforcing translation invariance gives a total of eight free parameters: four for a(y_i, y_j), two for b(y_i), and two for c(y_i)¹. Once dL/dθ is known, we can, following Eq. 14, recover derivatives with respect to tied parameters².
Because the previous dataset is quite limited (only four base 64x64 images), all methods
perform relatively well. Hence, we created a larger and more challenging dataset, consisting
of 200 200x300 images from the Berkeley segmentation dataset, split half for training and
testing. These are binarized by setting y_i = 1 if a pixel is above the image mean, and y_i = 0 otherwise. The noisy values x_i are created by setting x_i = y_i (1 − t_i^{1.25}) + (1 − y_i) t_i^{1.25}, for t_i uniform on [0, 1].
Table 1 shows results for all three datasets. All the results below use batch L-BFGS for learning, and uniform edge appearance probabilities of ρ = .5. The surrogate likelihood performs well, in fact beating the best reported results on the bimodal and Gaussian data. However, the univariate and clique loss functions provide better univariate accuracy. Fig. 1 shows example results. The surrogate likelihood (which is convex) was used to initialize the univariate and clique likelihood, while the univariate likelihood was used to initialize the smooth classification error.
¹ There are two redundancies, as adding a constant to a(y_i, y_j) or c(y_i) has no effect on p.
² Specifically, dL/da(y, y′) = Σ_{(i,j)} dL/dθ(y_i = y, y_j = y′), dL/db(y) = Σ_i x_i dL/dθ(y_i = y), and dL/dc(y) = Σ_i dL/dθ(y_i = y).
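The chain-rule bookkeeping of footnote 2 amounts to a few summations; the following sketch (our own, with assumed array shapes) accumulates the tied-parameter gradients for the grid CRF of Eq. 27:

```python
import numpy as np

def tied_gradients(dL_dtheta_edge, dL_dtheta_node, x):
    """Footnote-2 chain rule for the model of Eq. 27 (a sketch).
    dL_dtheta_edge: (n_edges, k, k) gradients w.r.t. theta(y_i, y_j);
    dL_dtheta_node: (n_pixels, k) gradients w.r.t. theta(y_i; x_i);
    x: (n_pixels,) input values."""
    dL_da = dL_dtheta_edge.sum(axis=0)                  # dL/da(y, y')
    dL_db = (x[:, None] * dL_dtheta_node).sum(axis=0)   # dL/db(y)
    dL_dc = dL_dtheta_node.sum(axis=0)                  # dL/dc(y)
    return dL_da, dL_db, dL_dc
```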
References
[1] Percy Liang and Michael Jordan. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In ICML, 2008.
[2] Samuel Gross, Olga Russakovsky, Chuong Do, and Serafim Batzoglou. Training conditional random fields for maximum labelwise accuracy. In NIPS, 2006.
[3] Sham Kakade, Yee Whye Teh, and Sam Roweis. An alternate objective function for Markovian fields. In ICML, 2002.
[4] Martin Wainwright. Estimating the "wrong" graphical model: benefits in the computation-limited setting. Journal of Machine Learning Research, 7:1829-1859, 2006.
[5] Justin Domke. Learning convex inference of marginals. In UAI, 2008.
[6] Frederik Eaton and Zoubin Ghahramani. Choosing a variable to clamp. In AISTATS, 2009.
[7] Max Welling and Yee Whye Teh. Belief optimization for binary networks: a stable alternative to loopy belief propagation. In UAI, 2001.
[8] Tom Heskes, Kees Albers, and Bert Kappen. Approximate inference and constrained optimization. In UAI, 2003.
[9] Alan Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: convergent alternatives to belief propagation. Neural Computation, 14, 2002.
[10] Martin Wainwright and Michael Jordan. Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn., 1(1-2):1-305, 2008.
[11] Jonathan Yedidia, William Freeman, and Yair Weiss. Constructing free energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51:2282-2312, 2005.
[12] Martin Wainwright, Tommi Jaakkola, and Alan Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313-2335, 2005.
[13] Ofer Meshi, Ariel Jaimovich, Amir Globerson, and Nir Friedman. Convexifying the Bethe free energy. In UAI, 2009.
[14] Tom Heskes. Convexity arguments for efficient minimization of the Bethe and Kikuchi free energies. J. Artif. Intell. Res. (JAIR), 26:153-190, 2006.
[15] Varun Ganapathi, David Vickrey, John Duchi, and Daphne Koller. Constrained approximate maximum entropy learning of Markov random fields. In UAI, 2008.
[16] Tamir Hazan and Amnon Shashua. Convergent message-passing algorithms for inference over general graphs with convex free energies. In UAI, pages 264-273, 2008.
[17] Neculai Andrei. Accelerated conjugate gradient algorithm with finite difference Hessian/vector product approximation for unconstrained optimization. J. Comput. Appl. Math., 230(2):570-582, 2009.
[18] Talya Meltzer, Amir Globerson, and Yair Weiss. Convergent message passing algorithms: a unifying view, 2009.
[19] Joris M. Mooij et al. libDAI 0.2.4: A free/open source C++ library for discrete approximate inference. http://www.libdai.org/, 2010.
[20] Sanjiv Kumar, Jonas August, and Martial Hebert. Exploiting inference for approximate parameter learning in discriminative fields: an empirical study. In EMMCVPR, 2005.
[21] Sanjiv Kumar and Martial Hebert. Discriminative random fields. International Journal of Computer Vision, 68(2):179-201, 2006.
[22] S. V. N. Vishwanathan, Nicol Schraudolph, Mark Schmidt, and Kevin Murphy. Accelerated training of conditional random fields with stochastic gradient methods. In ICML, 2006.
[23] Patrick Pletscher, Cheng Soon Ong, and Joachim Buhmann. Spanning tree approximations for conditional random fields. In AISTATS, 2009.
Tree-Structured Stick Breaking for Hierarchical Data
Ryan Prescott Adams∗
Dept. of Computer Science
University of Toronto
Zoubin Ghahramani
Dept. of Engineering
University of Cambridge
Michael I. Jordan
Depts. of EECS and Statistics
University of California, Berkeley
Abstract
Many data are naturally modeled by an unobserved hierarchical structure. In this
paper we propose a flexible nonparametric prior over unknown data hierarchies.
The approach uses nested stick-breaking processes to allow for trees of unbounded
width and depth, where data can live at any node and are infinitely exchangeable.
One can view our model as providing infinite mixtures where the components
have a dependency structure corresponding to an evolutionary diffusion down a
tree. By using a stick-breaking approach, we can apply Markov chain Monte Carlo
methods based on slice sampling to perform Bayesian inference and simulate from
the posterior distribution on trees. We apply our method to hierarchical clustering
of images and topic modeling of text data.
1 Introduction
Structural aspects of models are often critical to obtaining flexible, expressive model families. In
many cases, however, the structure is unobserved and must be inferred, either as an end in itself
or to assist in other estimation and prediction tasks. This paper addresses an important instance
of the structure learning problem: the case when the data arise from a latent hierarchy. We take a
direct nonparametric Bayesian approach, constructing a prior on tree-structured partitions of data that
provides for unbounded width and depth while still allowing tractable posterior inference.
Probabilistic approaches to latent hierarchies have been explored in a variety of domains. Unsupervised learning of densities and nested mixtures has received particular attention via finite-depth
trees [1], diffusive branching processes [2] and hierarchical clustering [3, 4]. Bayesian approaches to
learning latent hierarchies have also been useful for semi-supervised learning [5], relational learning
[6] and multi-task learning [7]. In the vision community, distributions over trees have been useful as
priors for figure motion [8] and for discovering visual taxonomies [9].
In this paper we develop a distribution over probability measures that imbues them with a natural
hierarchy. These hierarchies have unbounded width and depth and the data may live at internal nodes
on the tree. As the process is defined in terms of a distribution over probability measures and not as a
distribution over data per se, data from this model are infinitely exchangeable; the probability of any
set of data is not dependent on its ordering. Unlike other infinitely exchangeable models [2, 4], a
pseudo-time process is not required to describe the distribution on trees and it can be understood in
terms of other popular Bayesian nonparametric models.
Our new approach allows the components of an infinite mixture model to be interpreted as part of
a diffusive evolutionary process. Such a process captures the natural structure of many data. For
example, some scientific papers are considered seminal: they spawn new areas of research and
cause new papers to be written. We might expect that within a text corpus of scientific documents,
such papers would be the natural ancestors of more specialized papers that followed on from the new
ideas. This motivates two desirable features of a distribution over hierarchies: 1) ancestor data (the
∗ http://www.cs.toronto.edu/~rpa/
[Figure 1 panels: (a) Dirichlet process stick breaking; (b) Tree-structured stick breaking.]
Figure 1: a) Dirichlet process stick-breaking procedure, with a linear partitioning. b) Interleaving two stick-breaking processes yields a tree-structured partition. Rows 1, 3 and 5 are ν-breaks. Rows 2 and 4 are ψ-breaks.
"prototypes") should be able to live at internal nodes in the tree, and 2) as the ancestor/descendant
relationships are not known a priori, the data should be infinitely exchangeable.
2 A Tree-Structured Stick-Breaking Process
Stick-breaking processes based on the beta distribution have played a prominent role in the development of Bayesian nonparametric methods, most significantly with the constructive approach to the Dirichlet process (DP) due to Sethuraman [10]. A random probability measure G can be drawn from a DP with base measure αH using a sequence of beta variates via:

G = Σ_{i=1}^∞ π_i δ_{θ_i},    π_i = ν_i ∏_{i′=1}^{i−1} (1 − ν_{i′}),    θ_i ∼ H,    ν_i ∼ Be(1, α),    π_1 = ν_1.    (1)
We can view this as taking a stick of unit length and breaking it at a random location. We call the left side of the stick π_1 and then break the right side at a new place, calling the left side of this new break π_2. If we continue this process of "keep the left piece and break the right piece again" as in Fig. 1a, assigning each θ_i a random value drawn from H, we can view this as a random probability measure centered on H. The distribution over the sequence (π_1, π_2, · · ·) is a case of the GEM distribution [11], which also includes the Pitman-Yor process [12]. Note that in Eq. (1) the θ_i are i.i.d. from H; in the current paper these parameters will be drawn according to a hierarchical process.
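A truncated simulation of Eq. (1) is a useful reference point before the tree-structured version; a minimal sketch (our own, not from the paper):

```python
import numpy as np

def dp_stick_weights(alpha, n_sticks, rng):
    """Truncated Eq. (1): pi_i = nu_i * prod_{i'<i} (1 - nu_{i'})."""
    nu = rng.beta(1.0, alpha, size=n_sticks)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - nu)[:-1]])
    return nu * remaining

rng = np.random.default_rng(0)
pi = dp_stick_weights(alpha=2.0, n_sticks=50, rng=rng)
print(pi.sum())  # approaches 1 as the truncation level grows
```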
The GEM construction provides a distribution over infinite partitions of the unit interval, with natural numbers as the index set as in Fig. 1a. In this paper, we extend this idea to create a distribution over infinite partitions that also possess a hierarchical graph topology. To do this, we will use finite-length sequences of natural numbers as our index set on the partitions. Borrowing notation from the Pólya tree (PT) construction [13], let ε = (ε_1, ε_2, · · ·, ε_K) denote a length-K sequence of positive integers, i.e., ε_k ∈ ℕ₊. We denote the zero-length string as ε = ∅ and use |ε| to indicate the length of ε's sequence. These strings will index the nodes in the tree and |ε| will then be the depth of node ε.

We interleave two stick-breaking procedures as in Fig. 1b. The first has beta variates ν_ε ∼ Be(1, α(|ε|)) which determine the size of a given node's partition as a function of depth. The second has beta variates ψ_ε ∼ Be(1, γ), which determine the branching probabilities. Interleaving these processes partitions the unit interval. The size of the partition associated with each ε is given by
π_ε = ν_ε φ_ε ∏_{ε′≺ε} φ_{ε′} (1 − ν_{ε′}),    φ_{εi} = ψ_{εi} ∏_{j=1}^{i−1} (1 − ψ_{εj}),    π_∅ = ν_∅,    (2)

where εi denotes the sequence that results from appending i onto the end of ε, and ε′ ≺ ε indicates that ε could be constructed by appending onto ε′. When viewing these strings as identifying nodes on a tree, {εi : i ∈ 1, 2, · · ·} are the children of ε and {ε′ : ε′ ≺ ε} are the ancestors of ε. The {π_ε} in Eq. (2) can be seen as products of several decisions on how to allocate mass to nodes and branches in the tree: the {φ_ε} determine the probability of a particular sequence of children and the ν_ε and (1 − ν_ε) terms determine the proportion of mass allotted to ε versus nodes that are descendants of ε.
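As a check on Eq. (2), here is a small sketch (our own; nodes are encoded as tuples of child indices, and the ν and ψ dictionaries are assumed to contain draws for every node on the path to ε and its earlier siblings) that computes π_ε for a given node:

```python
def node_mass(eps, nu, psi):
    """Eq. (2): mass pi_eps of node eps (a tuple of child indices)."""
    # phi of child prefix+(i,): psi_{eps i} * prod_{j<i} (1 - psi_{eps j})
    def phi(prefix, i):
        p = psi[prefix + (i,)]
        for j in range(1, i):
            p *= 1.0 - psi[prefix + (j,)]
        return p
    mass = nu[eps]
    prefix = ()
    for i in eps:  # walk from the root down to eps
        mass *= phi(prefix, i) * (1.0 - nu[prefix])
        prefix = prefix + (i,)
    return mass

# Tiny example with hand-chosen stick draws (illustrative values only).
nu = {(): 0.3, (1,): 0.5, (1, 2): 0.4}
psi = {(1,): 0.6, (1, 1): 0.2, (1, 2): 0.7}
print(node_mass((1, 2), nu, psi))
```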
[Figure 2 panels (a)-(h), with settings: (a) α₀ = 1, λ = 1/2, γ = 1/5; (b) α₀ = 1, λ = 1, γ = 1/5; (c) α₀ = 1, λ = 1, γ = 1; (d) α₀ = 5, λ = 1/2, γ = 1/5; (e) α₀ = 5, λ = 1, γ = 1/5; (f) α₀ = 5, λ = 1/2, γ = 1; (g) α₀ = 25, λ = 1/2, γ = 1/5; (h) α₀ = 25, λ = 1/2, γ = 1.]
Figure 2: Eight samples of trees over partitions of fifty data, with different hyperparameter settings. The circles are represented nodes, and the squares are the data. Note that some of the sampled trees have represented nodes with no data associated with them and that the branch ordering does not correspond to a size-biased permutation.
We require that the {π_ε} sum to one. The ψ-sticks have no effect upon this, but α(·): ℕ → ℝ₊ (the depth-varying parameter for the ν-sticks) must satisfy Σ_{j=1}^∞ ln(1 + 1/α(j−1)) = +∞ (see [14]). This is clearly true for α(j) = α₀ > 0. A useful function that also satisfies this condition is α(j) = λ^j α₀ with α₀ > 0, λ ∈ (0, 1]. The decay parameter λ allows a distribution over trees with most of the mass at an intermediate depth. This is the α(·) we will assume throughout the remainder of the paper.
An Urn-based View
When a Bayesian nonparametric model induces partitions over data, it is sometimes possible to
construct a Blackwell-MacQueen [15] type urn scheme that corresponds to sequentially generating
data, while integrating out the underlying random measure. The "Chinese restaurant" metaphor for
the Dirichlet process is a popular example. In our model, we can use such an urn scheme to construct
a treed partition over a finite set of data.
The urn process can be seen as a path-reinforcing Bernoulli trip down the tree where each datum starts at the root and descends into children until it stops at some node. The first datum lives at the root node with probability 1/(α(0)+1), otherwise it descends and instantiates a new child. It stays at this new child with probability 1/(α(1)+1) or descends again and so on. A later datum stays at node ε with probability (N_ε + 1)/(N_ε + N_{ε·} + α(|ε|) + 1), where N_ε is the number of previous data that stopped at ε, and N_{ε·} is the number of previous data that came down this path of the tree but did not stop at ε, i.e., a sum over all descendants: N_{ε·} = Σ_{ε′≻ε} N_{ε′}. If a datum descends to ε but does not stop then it chooses which child to descend to according to a Chinese restaurant process where the previous customers are only those data who have also descended to this point. That is, if it has reached node ε but will not stay there, it descends to existing child εi with probability (N_{εi} + N_{εi·})/(N_{ε·} + γ) and instantiates a new child with probability γ/(N_{ε·} + γ). A particular path therefore becomes more likely according to its "popularity" with previous data. Note that a node can be a part of a popular path without having any data of its own. Fig. 2 shows the structures over fifty data drawn from this process with different hyperparameter settings. Note that the branch ordering in a realization of the urn scheme will not necessarily be the same as that of the size-biased ordering [16] of the partitions in Fig. 1b: the former is a tree over a finite set of data and the latter is over a random infinite partition.
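The urn view is also easy to simulate directly. The sketch below (ours; it assumes the constant-α case, i.e. α(j) = α₀ with λ = 1, and uses tuple-keyed dictionaries for the counts):

```python
import numpy as np

def urn_tree(n_data, alpha, gamma, rng):
    """Path-reinforcing urn: each datum walks down the tree, stopping at
    node eps w.p. (N_eps + 1)/(N_eps + N_eps. + alpha + 1), otherwise
    choosing a child by a Chinese-restaurant rule."""
    stopped = {}   # eps -> N_eps (data that stopped at eps)
    through = {}   # eps -> N_eps. (data that descended past eps)
    assignments = []
    for _ in range(n_data):
        eps = ()
        while True:
            n_stop = stopped.get(eps, 0)
            n_desc = through.get(eps, 0)
            if rng.random() < (n_stop + 1) / (n_stop + n_desc + alpha + 1):
                stopped[eps] = n_stop + 1
                break
            through[eps] = n_desc + 1
            # CRP over children, weighted by data that reached each child.
            kids, weights = [], []
            i = 1
            while (eps + (i,)) in stopped or (eps + (i,)) in through:
                c = eps + (i,)
                kids.append(c)
                weights.append(stopped.get(c, 0) + through.get(c, 0))
                i += 1
            weights.append(gamma)  # mass for instantiating a new child
            probs = np.array(weights, float) / sum(weights)
            j = rng.choice(len(probs), p=probs)
            eps = kids[j] if j < len(kids) else eps + (len(kids) + 1,)
        assignments.append(eps)
    return assignments

rng = np.random.default_rng(0)
print(urn_tree(10, alpha=1.0, gamma=0.2, rng=rng))
```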
The urn view allows us to compare this model to other priors on infinite trees. One contribution of this model is that the data can live at internal nodes in the tree, but are nevertheless infinitely exchangeable. This is in contrast to the model in [8], for example, which is not infinitely exchangeable. The nested Chinese restaurant process (nCRP) [17] provides a distribution over trees of unbounded width and depth, but data correspond to paths of infinite length, requiring an additional distribution over depths that is not path-dependent. The Pólya tree [13] uses a recursive stick-breaking process to specify a distribution over nested partitions in a binary tree, however the data live at infinitely-deep leaf nodes. The marginal distribution on the topology of a Dirichlet diffusion tree [2] (and the clustering variant of Kingman's coalescent [4]) provides path-reinforcement and infinite exchangeability, however it requires a pseudo-time hazard process and data do not live at internal nodes.
3 Hierarchical Priors for Node Parameters
One can view the stick-breaking construction of the Dirichlet process as generating an infinite partition and then labeling each cell i with parameter θ_i drawn i.i.d. from H. In a mixture model, data from the ith component are generated independently according to a distribution f(x | θ_i), where x takes values in a sample space X. In our model, we continue to assume that the data are generated independently given the latent labeling, but to take advantage of the tree-structured partitioning of Section 2 an i.i.d. assumption on the node parameters is inappropriate. Rather, the distribution over the parameters at node ε, denoted θ_ε, should depend in an interesting way on its ancestors {θ_{ε′} : ε′ ≺ ε}. A natural way to specify such dependency is via a directed graphical model, with the requirement that edges must always point down the tree. An intuitive subclass of such graphical models are those in which a child is conditionally independent of all ancestors, given its parents and any global hyperparameters. This is the case we will focus on here, as it provides a useful view of the parameter-generation process as a "diffusion down the tree" via a Markov transition kernel that can be essentially any distribution with a location parameter. Coupling such a kernel, which we denote T(θ_{εi} ← θ_ε), with a root-level prior p(θ_∅) and the node-wise data distribution f(x | θ_ε), we have a complete model for infinitely exchangeable tree-structured data on X. We now examine a few specific examples.
Generalized Gaussian Diffusions If our data distribution f(x | θ) is such that the parameters can be specified as a real-valued vector θ ∈ ℝ^M, then we can use a Gaussian distribution to describe the parent-to-child transition kernel: T_norm(θ_{εi} ← θ_ε) = N(θ_{εi} | η θ_ε, Λ), where η ∈ [0, 1). Such a kernel captures the simple idea that the child's parameters are noisy versions of the parent's, as specified by the covariance matrix Λ, while η ensures that all parameters in the tree have a finite marginal variance. While this will not result in a conjugate model unless the data are themselves Gaussian, it has the simple property that each node's parameter has a Gaussian prior that is specified by its parent. We present an application of this model in Section 5, where we model images as a distribution over binary vectors obtained by transforming a real-valued vector to (0, 1) via the logistic function.
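A draw from this kernel is one line of linear algebra; a small sketch (ours; passing Λ via a Cholesky factor is our implementation convenience, not the paper's):

```python
import numpy as np

def sample_child_params(theta_parent, eta, chol_Lambda, rng):
    """One draw from T_norm(theta_child <- theta_parent)
    = N(theta_child | eta * theta_parent, Lambda)."""
    noise = chol_Lambda @ rng.normal(size=theta_parent.shape)
    return eta * theta_parent + noise

rng = np.random.default_rng(0)
root = rng.normal(size=3)
child = sample_child_params(root, eta=0.9, chol_Lambda=0.1 * np.eye(3), rng=rng)
```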
Chained Dirichlet-Multinomial Distributions If each datum is a set of counts over M discrete outcomes, as in many finite topic models, a multinomial model for f(x | θ) may be appropriate. In this case, X = ℕ^M, and θ takes values in the (M−1)-simplex. We can construct a parent-to-child transition kernel via a Dirichlet distribution with concentration parameter κ: T_dir(θ_{εi} ← θ_ε) = Dir(κ θ_ε), using a symmetric Dirichlet for the root node, i.e., θ_∅ ∼ Dir(κ1).
Hierarchical Dirichlet Processes A very general way to specify the distribution over data is to say that it is drawn from a random probability measure with a Dirichlet process prior. In our case, one flexible approach would be to model the data at node ε with a distribution G_ε as in Eq. (1). This means that θ_ε ∼ G_ε where G_ε now corresponds to an infinite set of parameters. The hierarchical Dirichlet process (HDP) [18] provides a natural parent-to-child transition kernel for the tree-structured model, again with concentration parameter λ: T_hdp(G_{εi} ← G_ε) = DP(λ G_ε). At the top level, we specify a global base measure H for the root node, i.e., G_∅ ∼ DP(λH). One negative aspect of this transition kernel is that the G_ε will have a tendency to collapse down onto a single atom. One remedy is to smooth the kernel with η as in the Gaussian case, i.e., T_hdp(G_{εi} ← G_ε) = DP(λ (η G_ε + (1 − η) H)).
4 Inference via Markov chain Monte Carlo
We have so far defined a model for data that are generated from the parameters associated with the nodes of a random tree. Having seen N data and assuming a model f(x | θ_ε) as in the previous section, we wish to infer possible trees and model parameters. As in most complex probabilistic models, closed form inference is impossible and we instead generate posterior samples via Markov chain Monte Carlo (MCMC). To operate efficiently over a variety of regimes without tuning, we use slice sampling [19] extensively. This allows us to sample from the true posterior distribution over the finite quantities of interest despite our model containing an infinite number of parameters. The primary data structure in our Markov chain is the set of N strings describing the current assignments of data to nodes, which we denote {ε_n}_{n=1}^N. We represent the ν-sticks and parameters θ_ε for all nodes ε that are traversed by the data in its current assignments, i.e., {ν_ε, θ_ε : ∃n, ε ≼ ε_n}. We also represent all ψ-sticks in the "hull" of the tree that contains the data: if at some node ε one of the N data paths passes through child εi, then we represent all the ψ-sticks in the set ∪_n ∪_{εi ≼ ε_n} {ψ_{εj} : j ≤ i}.
function SAMP-ASSIGNMENT(n)
    p_slice ∼ Uni(0, f(x_n | θ_{ε_n}))
    u_min ← 0, u_max ← 1
    loop
        u ∼ Uni(u_min, u_max)
        ε ← FIND-NODE(u, ∅)
        p ← f(x_n | θ_ε)
        if p > p_slice then return ε
        else if ε < ε_n then u_min ← u
        else u_max ← u

function FIND-NODE(u, ε)
    if u < ν_ε then return ε
    else
        u ← (u − ν_ε)/(1 − ν_ε)
        while u ≥ 1 − ∏_j (1 − ψ_{εj}) do
            draw a new ψ-stick
        e ← edges from ψ-sticks
        i ← bin index for u from edges
        draw θ_{εi} and ν_{εi} if necessary
        u ← (u − e_i)/(e_{i+1} − e_i)
        return FIND-NODE(u, εi)

function SIZE-BIASED-PERM(ε)
    ρ ← ∅
    while represented children do
        w ← weights from {ψ_{εi}}
        w ← w \ ρ
        j ∼ w
        ρ ← append j
    return ρ
Slice Sampling Data Assignments The primary challenge in inference with Bayesian nonparametric mixture models is often sampling from the posterior distribution over assignments, as it is
frequently difficult to integrate over the infinity of unrepresented components. To avoid this difficulty,
we use a slice sampling approach that can be viewed as a combination of the Dirichlet slice sampler
of Walker [20] and the retrospective sampler of Papaspiliopolous and Roberts [21].
Section 2 described a path-reinforcing process for generating data from the model. An alternative
method is to draw a uniform variate u on (0, 1) and break sticks until we know what ? the u fell into.
One can imagine throwing a dart at the top of Fig. 1b and considering which ? it hits. We would
draw the sticks and parameters from the prior, as needed, conditioning on the state instantiated from
any previous draws and with parent-to-child transitions enforcing the prior downwards in the tree.
The pseudocode function FIND - NODE(u, ) with u ? Uni(0, 1) and = ? draws such a sample. This
representation leads to a slice sampling scheme on u that does not require any tuning parameters.
To slice sample the assignment of the nth datum, currently assigned to ε_n, we initialize our slice
sampling bounds to (0, 1). We draw a new u from the bounds and use the FIND - NODE function to
determine the associated ε from the currently-represented state, plus any additional state that must
be drawn from the prior. We do a lexical comparison (“string-like”) of the new ε and our current
state ε_n, to determine whether this new path corresponds to a u that is “above” or “below” our current
state. This lexical comparison prevents us from having to represent the initial u_n. We shrink the slice
sampling bounds appropriately, depending on the comparison, until we find a u that satisfies the slice.
This procedure is given in pseudocode as SAMP - ASSIGNMENT(n). After performing this procedure,
we can discard any state that is not in the previously-mentioned hull of representation.
Gibbs Sampling Stick Lengths  Given the represented sticks and the current assignments of nodes
to data, it is straightforward to resample the lengths of the sticks from the posterior beta distributions

    ν_ε | data ∼ Be(N_ε + 1, N_{ε·} + α(|ε|)),    ψ_{εi} | data ∼ Be(N_{εi·} + 1, γ + Σ_{j>i} N_{εj·}),

where N_ε and N_{ε·} are the path-based counts as described in Section 2.
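As a concrete illustration, a single Gibbs update of one node's sticks needs only the path-based counts; a minimal Python sketch follows (our own, with illustrative argument names; count bookkeeping is assumed done elsewhere).

import random

def resample_node_sticks(n_here, n_through, depth, alpha0, lam, gamma):
    """Resample (nu, psi_1..psi_J) for one node from the beta posteriors above.

    n_here: number of data stopping at the node; n_through[i]: number of
    data descending through child i; alpha(d) = alpha0 * lam**d as in the text.
    """
    nu = random.betavariate(n_here + 1.0,
                            sum(n_through) + alpha0 * lam ** depth)
    psis = [random.betavariate(n_i + 1.0, gamma + sum(n_through[i + 1:]))
            for i, n_i in enumerate(n_through)]
    return nu, psis

print(resample_node_sticks(3, [5, 2, 0], depth=1, alpha0=25.0, lam=0.5, gamma=1.0))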
Gibbs Sampling the Ordering of the ψ-Sticks  When using the stick-breaking representation of
the Dirichlet process, it is crucial for mixing to sample over possible orderings of the sticks. In our
model, we include such moves on the ψ-sticks. We iterate over each instantiated node and perform
a Gibbs update of the ordering of its immediate children using its invariance under size-biased
permutation (SBP) [16]. For a given node, the ψ-sticks provide a “local” set of weights that sum to
one. We repeatedly draw without replacement from the discrete distribution implied by the weights
and keep the ordering that results. Pitman [16] showed that distributions over sequences such as
our ψ-sticks are invariant under such permutations and we can view the SIZE-BIASED-PERM(ε)
procedure as a Metropolis–Hastings proposal with an acceptance ratio that is always one.
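The SIZE-BIASED-PERM move amounts to sampling indices without replacement, each time with probability proportional to its weight. A small self-contained Python version (ours, for illustration):

import random

def size_biased_perm(weights):
    """Draw a size-biased permutation of the indices of `weights`."""
    remaining = dict(enumerate(weights))
    order = []
    while remaining:
        r = random.uniform(0.0, sum(remaining.values()))
        acc = 0.0
        for idx, w in remaining.items():
            acc += w
            if r <= acc:
                order.append(idx)
                del remaining[idx]
                break
    return order

print(size_biased_perm([0.5, 0.3, 0.2]))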
Slice Sampling Stick-Breaking Hyperparameters  Given all of the instantiated sticks, we slice
sample from the conditional posterior distribution over the hyperparameters α_0, λ and γ:

    p(α_0, λ | {ν_ε}) ∝ I(α_0^min < α_0 < α_0^max) I(λ^min < λ < λ^max) ∏_ε Be(ν_ε | 1, λ^{|ε|} α_0)
    p(γ | {ψ_{εi}}) ∝ I(γ^min < γ < γ^max) ∏_{εi} Be(ψ_{εi} | 1, γ),

where the products are over nodes in the aforementioned hull. We initialize the bounds of the slice
sampler with the bounds of the top-hat prior.
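For instance, a shrinking-bracket slice sampler for γ given the instantiated ψ-sticks, initialized at the top-hat bounds, takes only a few lines of Python. This is a sketch under our own assumptions; note that Be(ψ | 1, γ) has density γ(1 − ψ)^{γ−1}.

import math, random

def log_post_gamma(gamma, psis, lo=0.05, hi=10.0):
    # Log conditional posterior of gamma under a Uni(lo, hi) top-hat prior.
    if not lo < gamma < hi:
        return -math.inf
    return sum(math.log(gamma) + (gamma - 1.0) * math.log(1.0 - p) for p in psis)

def slice_sample(x0, log_f, lo, hi):
    # One slice-sampling update; the bracket starts at the prior bounds.
    log_y = log_f(x0) + math.log(random.random())
    left, right = lo, hi
    while True:
        x1 = random.uniform(left, right)
        if log_f(x1) > log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

psis = [random.betavariate(1.0, 2.0) for _ in range(50)]
g = 1.0
for _ in range(100):
    g = slice_sample(g, lambda x: log_post_gamma(x, psis), 0.05, 10.0)
print(g)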
Figure 3: These figures show a subset of the tree learned from the 50,000 CIFAR-100 images. The top tree only
shows nodes for which there were at least 250 images. The ten shown at each node are those with the highest
probability under the node's distribution. The second row shows three expanded views of subtrees, with nodes
that have at least 50 images. Detailed views of portions of these subtrees are shown in the third row.
Selecting a Single Tree We have so far described a procedure for generating posterior samples
from the tree structures and associated stick-breaking processes. If our objective is to find a single
tree, however, samples from the posterior distribution are unsatisfying. Following [17], we report a
best single tree structure over the data by choosing the sample from our Markov chain that has the
highest complete-data likelihood p({x_n, ε_n}_{n=1}^N | {θ_ε}, {ν_ε, ψ_{εi}}, α_0, λ, γ).
5 Hierarchical Clustering of Images
We applied our model and MCMC inference to the problem of hierarchically clustering the CIFAR-100
image data set¹. These data are a labeled subset of the 80 million tiny images data [22]
with 50,000 32×32 color images. We did not use the labels in our clustering. We modeled the
images via 256-dimensional binary features that had been previously extracted from each image
(i.e., x_n ∈ {0,1}^256) using a deep neural network that had been trained for an image retrieval task
[23]. We used a factored Bernoulli likelihood at each node, parameterized by a latent 256-dimensional
real vector (i.e., θ_ε ∈ R^256) that was transformed component-wise via the logistic function:
    f(x_n | θ_ε) = ∏_{d=1}^{256} (1 + exp{−θ_ε^(d)})^{−x_n^(d)} (1 + exp{θ_ε^(d)})^{−(1 − x_n^(d))}.
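In log space this likelihood is numerically stable via log(1 + e^x) = logaddexp(0, x); a short NumPy sketch (ours, not the authors' code):

import numpy as np

def log_lik(x, theta):
    """Log of the factored Bernoulli likelihood above (x binary, theta real)."""
    x = np.asarray(x, dtype=float)
    return -(x * np.logaddexp(0.0, -theta)
             + (1.0 - x) * np.logaddexp(0.0, theta)).sum()

theta = np.random.randn(256)
x = (np.random.rand(256) < 0.5).astype(int)
print(log_lik(x, theta))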
The prior over the parameters of a child node was Gaussian with its parent's value as the mean.
The covariance of the prior (Λ in Section 3) was diagonal and inferred as part of the Markov chain.
We placed independent Uni(0.01, 1) priors on the elements of the diagonal. To efficiently learn the
node parameters, we used Hamiltonian (hybrid) Monte Carlo (HMC) [24], taking 25 leapfrog HMC
steps, with a randomized step size. We occasionally interleaved a slice sampling move for robustness.
¹ http://www.cs.utoronto.ca/~kriz/cifar.html
[Figure 4 appears here: a subtree of NIPS documents; each node displays its five most common words and five most common author names.]
Figure 4: A subtree of documents from NIPS 1-12, inferred using 20 topics. Only nodes with at least 50
documents are shown. Each node shows three aggregated statistics at that node: the five most common author
names, the five most common words and a histogram over the years of proceedings.
For the stick-breaking processes, we used α_0 ∼ Uni(10, 50), λ ∼ Uni(0.05, 0.8), and γ ∼ Uni(1, 10).
Using Python on a single core of a modern workstation, each MCMC iteration of the entire model
(including slice sampled reassignment of all 50,000 images) requires approximately three minutes.
Fig. 3 represents a part of the tree with the best complete-data log likelihood after 4000 such iterations.
The tree provides a useful visualization of the data set, capturing broad variations in color at the
higher levels of the tree, with lower branches varying in texture and shape. A larger version of this
tree is provided in the supplementary material.
6 Hierarchical Modeling of Document Topics
We also used our approach in a bag-of-words topic model, applying it to 1740 papers from NIPS
1–12². As in latent Dirichlet allocation (LDA) [25], we consider a topic to be a distribution over
words and each document to be described by a distribution over topics. In LDA, each document has a
unique topic distribution. In our model, however, each document lives at a node and that node has a
unique topic distribution. Thus multiple documents share a distribution over topics if they inhabit the
same node. Each node?s topic distribution is from a chained Dirichlet-multinomial as described in
Section 3. The topics each have symmetric Dirichlet priors over their word distributions. This results
in a different kind of topic model than that provided by the nested Chinese restaurant process. In the
nCRP, each node corresponds to a topic and documents are spread across infinitely-long paths down
the tree. Each word is drawn from a distribution over depths that is given by a GEM distribution. In
the nCRP, it is not the documents that have the hierarchy, but the topics.
We did two kinds of analyses. The first is a visualization as with the image data of the previous
section, using all 1740 documents. The subtree in Fig. 4 shows the nodes that had at least fifty
documents, along with the most common authors and words at that node. The normalized histogram
in each box shows which of the twelve years are represented among the documents in that node.
² http://cs.nyu.edu/~roweis/data.html
[Figure 5 appears here. Panel (a): improvement versus multinomial, by number of topics (x-axis: Number of Topics; y-axis: Perplexity Improvement Over Multinomial, in nats; series: LDA, TSSB). Panel (b): best perplexity per word, by folds (x-axis: Folds; y-axis: Best Perplexity Per Word, in nats; series: Multinomial, LDA, TSSB).]
Figure 5: Results of predictive performance comparison between latent Dirichlet allocation (LDA) and
tree-structured stick breaking (TSSB). a) Mean improvement in perplexity per word over Laplace-smoothed
multinomial, as a function of topics (larger is better). The error bars show the standard deviation of the
improvement across the ten folds. b) Best predictive perplexity per word for each fold (smaller is better). The
numbers above the LDA and TSSB bars show how many topics were used to achieve this.
An expanded version of this tree is provided in the supplementary material. Secondly, we quantitatively
assessed the predictive performance of the model. We created ten random partitions of the NIPS
corpus into 1200 training and 540 test documents. We then performed inference with different
numbers of topics (10, 20, . . . , 100) and evaluated the predictive perplexity of the held-out data
using an empirical likelihood estimate taken from a mixture of multinomials (pseudo-documents of
infinite length, see, e.g. [26]) with 100,000 components. As Fig. 5a shows, our model improves in
performance over standard LDA for smaller numbers of topics. This improvement appears to be
due to the constraints on possible topic distributions that are imposed by the diffusion. For larger
numbers of topics, however, it may be that these constraints become a hindrance and the model may
be allocating predictive mass to regions where it is not warranted. In absolute terms, more topics did
not appear to improve predictive performance for LDA or the tree-structured model. Both models
performed best with fewer than fifty topics and the best tree model outperformed the best LDA model
on all folds, as shown in Fig. 5b.
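A sketch of the empirical-likelihood perplexity computation, in the spirit of [26]: draw S pseudo-documents from the model's topic-proportion prior, treat each as a multinomial over words, and score held-out documents under the equally weighted mixture. All names and the estimator details here are our own illustrative assumptions.

import numpy as np

def perplexity(test_counts, sample_props, topics, S=1000):
    """test_counts: (D, V) word counts; topics: (K, V) topic-word dists;
    sample_props(S) -> (S, K) topic proportions drawn from the model."""
    word_probs = sample_props(S) @ topics             # (S, V) pseudo-documents
    log_wp = np.log(word_probs + 1e-300)
    ll = test_counts @ log_wp.T                       # (D, S) doc log-liks
    doc_ll = np.logaddexp.reduce(ll, axis=1) - np.log(S)
    return np.exp(-doc_ll.sum() / test_counts.sum())  # per-word perplexity

K, V = 5, 100
topics = np.random.dirichlet(np.ones(V), size=K)
test = np.random.multinomial(50, topics.mean(0), size=10)
print(perplexity(test, lambda S: np.random.dirichlet(np.ones(K), size=S),
                 topics, S=200))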
The MCMC inference procedure we used to train our model was as follows: first, we ran Gibbs
sampling of a standard LDA topic model for 1000 iterations. We then burned in the tree inference
for 500 iterations with fixed word-topic associations. We then allowed the word-topic associations
to vary and burned in for an additional 500 iterations, before drawing 5000 samples from the full
posterior. For the comparison, we burned in LDA for 1000 iterations and then drew 5000 samples
from the posterior [27]. For both models we thinned the samples by a factor of 50. The mixing of the
topic model seems to be somewhat sensitive to the initialization of the κ parameter in the chained
Dirichlet-multinomial and we initialized this parameter to be the same as the number of topics.
7 Discussion
We have presented a model for a distribution over random measures that also constructs a hierarchy,
with the goal of constructing a general-purpose prior on tree-structured data. Our approach is novel in
that it combines infinite exchangeability with a representation that allows data to live at internal nodes
on the tree, without a hazard rate process. We have developed a practical inference approach based
on Markov chain Monte Carlo and demonstrated it on two real-world data sets in different domains.
The imposition of structure on the parameters of an infinite mixture model is an increasingly important
topic. In this light, our notion of evolutionary diffusion down a tree sits within the larger class of
models that construct dependencies between distributions on random measures [28, 29, 18].
Acknowledgements
The authors wish to thank Alex Krizhevsky for providing the image feature data. We also thank Kurt
Miller, Iain Murray, Hanna Wallach, and Sinead Williamson for valuable discussions, and Yee Whye
Teh for suggesting Gibbs moves based on size-biased permutation. RPA is a Junior Fellow of the
Canadian Institute for Advanced Research.
References
[1] Christopher K. I. Williams. A MCMC approach to hierarchical mixture modelling. In Advances in Neural Information Processing Systems 12, pages 680–686. 2000.
[2] Radford M. Neal. Density modeling and clustering using Dirichlet diffusion trees. In Bayesian Statistics 7, pages 619–629, 2003.
[3] Katherine A. Heller and Zoubin Ghahramani. Bayesian hierarchical clustering. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[4] Yee Whye Teh, Hal Daumé III, and Daniel Roy. Bayesian agglomerative clustering with coalescents. In Advances in Neural Information Processing Systems 20, 2007.
[5] Charles Kemp, Thomas L. Griffiths, Sean Stromsten, and Joshua B. Tenenbaum. Semi-supervised learning with trees. In Advances in Neural Information Processing Systems 16. 2004.
[6] Daniel M. Roy, Charles Kemp, Vikash K. Mansinghka, and Joshua B. Tenenbaum. Learning annotated hierarchies from relational data. In Advances in Neural Information Processing Systems 19, 2007.
[7] Hal Daumé III. Bayesian multitask learning with latent hierarchies. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, 2009.
[8] Edward Meeds, David A. Ross, Richard S. Zemel, and Sam T. Roweis. Learning stick-figure models using nonparametric Bayesian priors over trees. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[9] Evgeniy Bart, Ian Porteous, Pietro Perona, and Max Welling. Unsupervised learning of visual taxonomies. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[10] Jayaram Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[11] Jim Pitman. Poisson–Dirichlet and GEM invariant distributions for split-and-merge transformation of an interval partition. Combinatorics, Probability and Computing, 11:501–514, 2002.
[12] Jim Pitman and Marc Yor. The two-parameter Poisson–Dirichlet distribution derived from a stable subordinator. The Annals of Probability, 25(2):855–900, 1997.
[13] R. Daniel Mauldin, William D. Sudderth, and S. C. Williams. Pólya trees and random distributions. The Annals of Statistics, 20(3):1203–1221, September 1992.
[14] Hemant Ishwaran and Lancelot F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161–173, March 2001.
[15] David Blackwell and James B. MacQueen. Ferguson distributions via Pólya urn schemes. Annals of Statistics, 1(2):353–355, 1973.
[16] Jim Pitman. Random discrete distributions invariant under size-biased permutation. Advances in Applied Probability, 28(2):525–539, 1996.
[17] David M. Blei, Thomas L. Griffiths, and Michael I. Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 57(2):1–30, 2010.
[18] Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[19] Radford M. Neal. Slice sampling (with discussion). The Annals of Statistics, 31(3):705–767, 2003.
[20] Stephen G. Walker. Sampling the Dirichlet mixture model with slices. Communications in Statistics, 36:45–54, 2007.
[21] Omiros Papaspiliopoulos and Gareth O. Roberts. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95(1):169–186, 2008.
[22] Antonio Torralba, Rob Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958–1970, 2008.
[23] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Department of Computer Science, University of Toronto, 2009.
[24] Radford M. Neal. MCMC using Hamiltonian dynamics. In Handbook of Markov chain Monte Carlo. Chapman and Hall / CRC Press, 2011.
[25] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[26] Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. Evaluation methods for topic models. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[27] Tom L. Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences of the United States of America, 101(Suppl. 1):5228–5235, 2004.
[28] Steven N. MacEachern. Dependent nonparametric processes. In Proceedings of the Section on Bayesian Statistical Science, 1999.
[29] Steven N. MacEachern, Athanasios Kottas, and Alan E. Gelfand. Spatial nonparametric Bayesian models. Technical Report 01-10, Institute of Statistics and Decision Sciences, Duke University, 2001.
3,433 | 4,109 | Movement extraction by detecting dynamics switches and repetitions
Silvia Chiappa
Statistical Laboratory
Wilberforce Road, Cambridge, UK
[email protected]
Jan Peters
Max Planck Institute for Biological Cybernetics
Spemannstrasse 38, Tuebingen, Germany
[email protected]
Abstract
Many time-series such as human movement data consist of a sequence of basic
actions, e.g., forehands and backhands in tennis. Automatically extracting and
characterizing such actions is an important problem for a variety of different applications. In this paper, we present a probabilistic segmentation approach in which
an observed time-series is modeled as a concatenation of segments corresponding
to different basic actions. Each segment is generated through a noisy transformation of one of a few hidden trajectories representing different types of movement,
with possible time re-scaling. We analyze three different approximation methods
for dealing with model intractability, and demonstrate how the proposed approach
can successfully segment table tennis movements recorded using a robot arm as
haptic input device.
1 Introduction
Motion capture systems have become widespread in many application areas such as robotics [18],
physical therapy, sports sciences [10], virtual reality [15], artificial movie generation [13], computer
games [1], etc. These systems are used for extracting the movement templates characterizing basic
actions contained in their recordings. In physical therapy and sports sciences, these templates are
employed to analyze a patient's progress or a sports professional's movements; in robotics, virtual reality, movie generation or computer games, they become the basic elements for composing complex
actions.
In order to obtain the movement templates, boundaries between actions need to be detected. Furthermore, fundamental similarities and differences in the dynamics underlying different actions need to
be captured. For example, in a recording from a game of table tennis, observations corresponding to
different actions can differ, due to different goals for hitting the ball, racket speeds, desired ball interaction, etc. The system needs to determine whether this dissimilarity corresponds to substantially
diverse types of underlying movements (such as in the case of a forehand and a backhand), or not
(such as in the case of two forehands that differ only in speed).
To date, most approaches addressed the problem by using considerable manual interaction [16]; an
important advancement would be to develop an automatic method that requires little human intervention. In this paper, we present a probabilistic model in which actions are assumed to arise from
noisy transformations of a small set of hidden trajectories, each representing a different movement
template, with non-linear time re-scaling accounting for differences in action durations. Action
boundaries are explicitly modeled through a set of discrete random variables. Segmentation is obtained by inferring, at each time-step, the position of the observations in the current action and the
underlying movement template. To guide segmentation, we impose constraints on the minimum and
maximum duration that each action can have.
1
[Figure 1 appears here: (a) three hidden trajectories generating time-warped segments of the observed time-series; (b) the belief network of the model.]
Figure 1: (a) The hidden dynamics shown on the top layer are assumed to generate the time-series
at the bottom. (b) Belief network representation of the proposed segmentation model. Rectangular
nodes indicate discrete variables, while (filled) oval nodes indicate (observed) continuous variables.
We apply the model to a human game of table tennis recorded with a Barrett WAM used as a haptic
input device, and show that we can obtain a meaningful segmentation of the time-series.
2 The Segmentation Model
In the proposed segmentation approach, the observations originate from a set of continuous-valued
hidden trajectories, each representing a different movement template. Specifically, we assume that
the observed time-series consists of a concatenation of segments (basic actions), each generated
through a noisy transformation of one of the hidden trajectories, with possible time re-scaling. This
generation process is illustrated in Figure 1 (a), where the observations on the lower graph are
generated from the three underlying hidden trajectories on the upper graph. Time re-scaling happens
during the generation process, e.g., the first hidden trajectory of length 97 gives rise to three segments
of length 75, 68 and 94 respectively.
The observed time-series and the S hidden trajectories are represented by the continuous random
variables¹ v_{1:T} ≡ v_1, . . . , v_T (v_t ∈ R^V) and h^{1:S}_{1:M} ≡ h^1_{1:M}, . . . , h^S_{1:M} (h^i_m ∈ R^H), respectively.
Furthermore, we introduce two sets of discrete random variables α_{1:T} and z_{1:T}. The first set is
used to infer which movement template generated the observations at each time-step, to detect action boundaries, and to define hard constraints on the minimum and maximum duration of each
observed action. The second set is used to model time re-scaling from the hidden trajectories to the
observations. We assume that the joint distribution of these variables factorizes as follows

    p(h^{1:S}_{1:M}) ∏_t p(v_t | h^{1:S}_{1:M}, z_t, α_t) p(z_t | z_{t−1}, α_{t−1:t}) p(α_t | α_{t−1}).
These independence relations are graphically represented by the belief network of Figure 1 (b).
The variable α_t is a triple α_t = {s_t, d_t, c_t} with a similar role as in regime-switching models with
explicit regime-duration distribution (ERDMs) [4]. The variable s_t ∈ {1, . . . , S} indicates which
of the S hidden trajectories underlies the observations at time t. The duration variable d_t specifies
the time interval spanned by the observations forming the current action, and takes a value between
d_min and d_max. The count variable c_t indicates the time distance to the beginning of the next action,
taking value c_t = d_t and c_t = 1 respectively at the beginning and end of an action. More specifically,
we define p(α_t | α_{t−1}) = p(c_t | d_t, c_{t−1}) p(d_t | d_{t−1}, c_{t−1}) p(s_t | s_{t−1}, c_{t−1}) with²

¹ For the sake of notational simplicity, we describe the model for the case of a single observed time-series and hidden trajectories of the same length M.
² We assume c_0 = 1, c_T = 1. For t = 1, p(s_1) = π̄_{s_1}, p(d_1) = λ_{d_1}, p(c_1 | d_1) = δ(c_1 = d_1).
    p(s_t | s_{t−1}, c_{t−1}) = { π_{s_t, s_{t−1}} if c_{t−1} = 1;  δ(s_t = s_{t−1}) if c_{t−1} > 1 },
    p(d_t | d_{t−1}, c_{t−1}) = { λ_{d_t} if c_{t−1} = 1;  δ(d_t = d_{t−1}) if c_{t−1} > 1 },
    p(c_t | d_t, c_{t−1}) = { δ(c_t = d_t) if c_{t−1} = 1;  δ(c_t = c_{t−1} − 1) if c_{t−1} > 1 },

where δ(x = y) = 1 if x = y and δ(x = y) = 0 if x ≠ y, π is a matrix specifying the time-invariant
dynamics-switching distribution, and λ is a vector defining the action-duration distribution.
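The three factors reduce to: counts tick down deterministically inside an action, and a new dynamics type and duration are drawn whenever the count hits one. A minimal NumPy sketch of sampling the (s_t, d_t, c_t) chain (our own transcription; names and conventions are illustrative):

import numpy as np

def sample_chain(T, pi, pi0, lam, d_min, rng):
    """Sample (s_t, d_t, c_t) for t = 1..T; pi[:, s] is the switching
    distribution out of dynamics s, lam the duration distribution over
    {d_min, ..., d_min + len(lam) - 1}."""
    s = rng.choice(len(pi0), p=pi0)
    d = rng.choice(len(lam), p=lam) + d_min
    c = d
    out = []
    for _ in range(T):
        out.append((s, d, c))
        if c > 1:
            c -= 1                                   # inside an action
        else:
            s = rng.choice(pi.shape[0], p=pi[:, s])  # new action: switch
            d = rng.choice(len(lam), p=lam) + d_min  # new duration
            c = d
    return out

rng = np.random.default_rng(0)
pi = np.array([[0.1, 0.9], [0.9, 0.1]])
print(sample_chain(12, pi, [0.5, 0.5], [0.2, 0.5, 0.3], 2, rng)[:6])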
The variable z_t indicates which of the M elements in the hidden trajectory generated the observations v_t. We define p(z_t | z_{t−1}, α_{t−1:t}) = p(z_t | z_{t−1}, d_t, c_{t−1:t}) with

    p(z_t | z_{t−1}, d_t, c_{t−1:t}) = { ω̄^{d_t,c_t}_{z_t} if c_{t−1} = 1;  ω^{d_t,c_t}_{z_t,z_{t−1}} if c_{t−1} > 1 }.

The vector ω̄^{d_t,c_t} and the matrix ω^{d_t,c_t} encode two constraints³. First, z_t − z_{t−1} ∈ {1, . . . , w_max}
ensures that subsequent observations are generated by subsequent elements of the hidden trajectory
and imposes a limit on the magnitude of time-warping. Second, z_t ∈ {d_t − c_t + 1, . . . , M − c_t + 1}
accounts for the d_t − c_t and c_t − 1 observations preceding and following v_t in the action.
The hidden trajectories follow independent linear Markovian dynamics with Gaussian noise, that is,
p(h^i_m | h^i_{m−1}) = N(F^i h^i_{m−1}, Σ^i_H), h^i_1 ∼ N(μ^i, Σ^i). Finally, the observations are generated from a
linear transformation of the hidden variables with Gaussian noise

    p(v_t | h^{1:S}_{1:M}, z_t, α_t) = N(G^{s_t} h^{s_t}_{z_t} + m_{d_t, t+c_t−1}, Σ^{s_t}_V),

where the term m_{d_t, t+c_t−1} is common to all observations belonging to the same action and allows for
spatial translation.

The generative process underlying the model is described in detail in⁴ Table 1. The set θ of unknown
model parameters is given by

    θ = {F^{1:S}, G^{1:S}, Σ^{1:S}_H, Σ^{1:S}_V, μ^{1:S}, Σ^{1:S}, π, π̄, λ, ω, ω̄, m}.

After learning θ, we can sample a segmentation from p(α_{1:T} | v_{1:T}) or compute the most likely
segmentation α*_{1:T} = arg max_{α_{1:T}} p(α_{1:T} | v_{1:T})⁵.

Table 1: Model's Generation Mechanism
Table 1: Model?s Generation Mechanism
for i = 1, . . . , S do
generate hidden trajectory i
hi1 ? N (?i , ?i )
h
h
him = F i him?1 + ?m
, ?m
? N (0, ?iH )
set t = 1
for action a = 1, . . . , A do
sample a dynamics type st ? ?:,st?1
sample a duration dt ? ?
mark the beginning of the action ct = dt
while ct ? 1 do
dt ,ct
sample time-warping zt ? ?:,z
t?1
generate the observations
vt = Gst hzstt + ?dt ,t+ct ?1 + ?tv
?tv ? N (0, ?Vst )
t=t+1
if ct?1 > 1 do
st = st?1 , dt = dt?1 , ct = ct?1 ?1
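The hidden-trajectory part of Table 1 is an ordinary linear-Gaussian recursion; a minimal NumPy transcription follows (ours, with arbitrary example dynamics, not the authors' code).

import numpy as np

def sample_trajectory(F, Sigma_H, mu, Sigma0, M, rng):
    """Draw one hidden trajectory h_{1:M} from the dynamics in Table 1."""
    h = np.empty((M, mu.shape[0]))
    h[0] = rng.multivariate_normal(mu, Sigma0)
    for m in range(1, M):
        h[m] = rng.multivariate_normal(F @ h[m - 1], Sigma_H)
    return h

rng = np.random.default_rng(1)
F = np.array([[0.99, 0.1], [0.0, 0.99]])
h = sample_trajectory(F, 0.01 * np.eye(2), np.zeros(2), np.eye(2), 50, rng)
print(h.shape)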
Relation to previous models. From a modeling point of view, the presented method builds on
previous approaches that consider the observed time-series as time-warped transformations of one
or several continuous-valued hidden trajectories. In [11], the authors introduced a model in which
different time-series are assumed to be generated by a single continuous-valued latent trace, with
spatial and time re-scaling. This model was used to align speech sequences. In [6], a modified
version of such a model was employed in the domain of helicopter flight to learn a desired trajectory
from demonstrations. In [12] and [14], the authors considered the case in which each time-series is
generated by one of a set of different hidden trajectories. None of these models can deal with the
situation in which possibly different dynamics underlie different segments of the same time-series.
From an application point of view, previous segmentation systems for extracting basic movements
employed considerable human intervention [16]. On the other hand, automatic probabilistic methods
for modeling movement templates assumed that the time-series data was pre-segmented into basic
movements [5, 17].
³ In the experiments we added the additional constraint that nearly complete movements are observed, that is z_{t−d_t+c_t} ∈ {1, . . . , Δ}, z_{t+c_t−1} ∈ {Γ, . . . , M} (see the Appendix for more details).
⁴ With π_{:,s_{t−1}} we indicate the vector of transition probabilities from dynamics type s_{t−1} to any dynamics.
⁵ Due to space limitations, we describe only how to sample a segmentation, which is required in the Gibbs sampling method.
3 Inference and Learning
The interaction between the continuous and discrete hidden variables renders the computation of the
posterior distributions required for learning and sampling a segmentation intractable. In this section,
we present and analyze three different approximation methods for dealing with this problem. In the
first (variational) method, p(h^{1:S}_{1:M}, z_{1:T}, α_{1:T} | v_{1:T}, θ) is approximated with a simpler distribution
q, and the optimal q and θ are found by maximizing a tractable lower bound on the log-likelihood
using an Expectation-Maximization (EM) approach. In the second (maximum a posteriori) method,
we estimate the most likely set of hidden trajectories and θ by maximizing p(h^{1:S}_{1:M}, v_{1:T} | θ) using an
EM approach. In the third (Gibbs sampling) method, we use stochastic EM [3] with Gibbs sampling.
3.1 Variational Method
In the variational approximation, we introduce a distribution q in which the problematic dependence
between the hidden dynamics and the segmentation and time-warping variables is relaxed, that is⁶

    q(h^{1:S}_{1:M}, z_{1:T}, α_{1:T}) = q(h^{1:S}_{1:M}) q(z_{1:T} | α_{1:T}) q(α_{1:T}).

From the Kullback-Leibler divergence between this distribution and the original posterior distribution we obtain a tractable lower bound on the log-likelihood log p(v_{1:T} | θ), given by

    B(q, θ) = H[q(h^{1:S}_{1:M})] + ⟨H[q(z_{1:T} | α_{1:T})]⟩_{q(α_{1:T})} + H[q(α_{1:T})]
            + ⟨log p(v_{1:T} | h^{1:S}_{1:M}, z_{1:T}, α_{1:T}, θ)⟩_{q(h^{1:S}_{1:M}) q(z_{1:T}|α_{1:T}) q(α_{1:T})}
            + ⟨log p(z_{1:T} | α_{1:T}, θ)⟩_{q(z_{1:T}|α_{1:T}) q(α_{1:T})} + ⟨log p(α_{1:T} | θ)⟩_{q(α_{1:T})}
            + ⟨log p(h^{1:S}_{1:M} | θ)⟩_{q(h^{1:S}_{1:M})},

where ⟨·⟩_q denotes expectation with respect to q, and H[q] denotes the entropy of q. We then use a
variational EM algorithm in which B(q, θ) is iteratively maximized with respect to q and the model
parameters θ until convergence⁷.

Maximization with respect to q leads to the following updates

    q(h^{1:S}_{1:M}) ∝ p(h^{1:S}_{1:M}) exp⟨log p(v_{1:T} | h^{1:S}_{1:M}, z_{1:T}, α_{1:T})⟩_{q(z_{1:T}|α_{1:T}) q(α_{1:T})},   (1)
    q(α_{1:T}) ∝ p(α_{1:T}) exp(H[q(z_{1:T} | α_{1:T})]) exp⟨log p(v_{1:T}, z_{1:T} | h^{1:S}_{1:M}, α_{1:T})⟩_{q(h^{1:S}_{1:M}) q(z_{1:T}|α_{1:T})},   (2)
    q(z_{1:T} | α_{1:T}) ∝ p(z_{1:T} | α_{1:T}) exp⟨log p(v_{1:T} | h^{1:S}_{1:M}, z_{1:T}, α_{1:T})⟩_{q(h^{1:S}_{1:M})}.   (3)
Before describing how to perform inference on these distributions, we observe that all quantities
required for learning θ, sampling a segmentation, and updating q(h^{1:S}_{1:M}) can be formulated such
that only partial inference on q(α_{1:T}) and q(z_{1:T} | α_{1:T}) is required. For example, we can write

    ⟨log p(v_{1:T} | h^{1:S}_{1:M}, z_{1:T}, α_{1:T})⟩_{q(z_{1:T}, α_{1:T})} = Σ_{t,i,k} γ_t^{i,k,1} Σ_{τ,m} ρ_τ^{i,k,t,m} log p(v_τ | h^i_m, z_τ = m, α_t^{i,k,1}),   (4)

with γ_t^{i,k,1} = q(α_t^{i,k,1}), ρ_τ^{i,k,t,m} = q(z_τ = m | α_t^{i,k,1}), and α_t^{i,k,1} ≡ {s_t = i, d_t = k, c_t = 1}. Thus, only
posteriors for which the count variables take value 1 are required⁸.
Inference on q(h^{1:S}_{1:M}). We first notice that, by using (4) in (1), we obtain q(h^{1:S}_{1:M}) = ∏_i q(h^i_{1:M}).
We then observe that we can rewrite the update for q(h^i_{1:M}) as proportional to the joint distribution
of the following linear Gaussian state-space model (LGSSM)

    h^i_m = F^i h^i_{m−1} + η^h_m,  η^h_m ∼ N(0, Σ^i_H),  h^i_1 ∼ N(μ^i, Σ^i),  v̂^i_m = G^i h^i_m + η^v_m,  η^v_m ∼ N(0, Σ̂^i_{V,m}),

⁶ Conditioning on v_{1:T} in q is omitted for notational simplicity.
⁷ Maximization with respect to θ is omitted due to space limitations.
⁸ This is common to ERDMs using separate duration and count variables [4].
where

    v̂^i_m ≡ (1/a^i_m) Σ_{t,k} γ_t^{i,k,1} Σ_{τ=t−k+1}^{t} ρ_τ^{i,k,t,m} v_τ,   Σ̂^i_{V,m} ≡ (1/a^i_m) Σ^i_V,   a^i_m ≡ Σ_{t,k} γ_t^{i,k,1} Σ_{τ=t−k+1}^{t} ρ_τ^{i,k,t,m}.

Therefore, inference on q(h^{1:S}_{1:M}) can be accomplished with LGSSM smoothing routines [7].
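Any standard Kalman filter plus Rauch-Tung-Striebel (RTS) smoother can be used here. The following textbook implementation (ours, smoothed means only) would run on the pseudo-observations v̂^i_{1:M} defined above:

import numpy as np

def rts_smooth(v, F, G, Q, R, mu0, P0):
    """Kalman filter + RTS smoother; returns smoothed state means (M, H)."""
    M = v.shape[0]
    H = mu0.shape[0]
    mf = np.empty((M, H))
    Pf = np.empty((M, H, H))
    m_pred, P_pred = mu0, P0
    for t in range(M):
        S = G @ P_pred @ G.T + R                 # innovation covariance
        K = P_pred @ G.T @ np.linalg.inv(S)      # Kalman gain
        mf[t] = m_pred + K @ (v[t] - G @ m_pred)
        Pf[t] = P_pred - K @ G @ P_pred
        m_pred, P_pred = F @ mf[t], F @ Pf[t] @ F.T + Q
    ms = mf.copy()
    for t in range(M - 2, -1, -1):               # backward RTS pass
        P_pr = F @ Pf[t] @ F.T + Q
        J = Pf[t] @ F.T @ np.linalg.inv(P_pr)
        ms[t] = mf[t] + J @ (ms[t + 1] - F @ mf[t])
    return ms

rng = np.random.default_rng(0)
v = rng.normal(size=(30, 2))
print(rts_smooth(v, np.eye(2), np.eye(2), 0.01 * np.eye(2),
                 0.1 * np.eye(2), np.zeros(2), np.eye(2)).shape)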
Inference on q(α_{1:T}). By substituting update (3) (including the normalization constant) into update (2), we obtain q(α_{1:T}) ∝ q(v_{1:T} | α_{1:T}) p(α_{1:T}). This update has the form of the joint distribution of an ERDM using separate duration and count variables [4]. Therefore, we can employ
similar forward-backward recursions. More specifically, γ_t^{i,k,1} = q(α_t^{i,k,1}) = q(v_{t+1:T} | s_t = i, c_t =
1) q(α_t^{i,k,1}, v_{1:t}) / q(v_{1:T}) = b_t^{i,1} f_t^{i,k,1} / q(v_{1:T}), where

    f_t^{i,k,1} = q(v_{t−k+1:t} | α_t^{i,k,1}) Σ_{j,l} p(α_t^{i,k,1} | α_{t−k}^{j,l,1}) f_{t−k}^{j,l,1},   b_t^{j,1} = Σ_{i,k} q(v_{t+1:t+k} | α_{t+k}^{i,k,1}) π_{i,j} b_{t+k}^{i,1} λ_k,

with p(α_t^{i,k,1} | α_{t−k}^{j,l,1}) = λ_k π_{i,j}.
Since we have imposed the constraints c_0 = 1, c_T = 1, we need to replace terms such as p(d_t =
k, c_t = 1 | c_{t−k} = 1) = λ_k with p(d_t = k, c_t = 1 | c_{t−k} = 1, c_0 = 1, c_T = 1). The constraint c_T = 1
implies q(v_{1:T}) = Σ_{j,l} f_T^{j,l,1}.

Required terms such as q(v_{t−k+1:t} | α_t^{i,k,1}) can be computed as likelihood terms when performing
inference on q(z_{t−k+1:t} | α_t^{i,k,1}).

Inference on q(z_{1:T} | α_{1:T}). The form of update (3) implies that inference on distributions of the
type q(z_{t−k+1:t} | α_t^{i,k,1}) can be accomplished with forward-backward routines similar to the ones
used in hidden Markov models (HMMs).
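The constrained time-warping chain is a finite HMM over z, so the required posteriors and likelihood terms come from standard forward-backward in log space. A generic sketch (ours, not specific to this model):

import numpy as np

def forward_backward(log_lik, log_trans, log_init):
    """log_lik: (T, M) emissions; log_trans[m2, m1] = log p(z_t = m2 | z_{t-1} = m1);
    returns per-step posteriors q(z_t) and the data log-likelihood."""
    T, M = log_lik.shape
    la = np.empty((T, M))
    la[0] = log_init + log_lik[0]
    for t in range(1, T):
        la[t] = log_lik[t] + np.logaddexp.reduce(log_trans + la[t - 1], axis=1)
    lb = np.zeros((T, M))
    for t in range(T - 2, -1, -1):
        lb[t] = np.logaddexp.reduce(log_trans.T + log_lik[t + 1] + lb[t + 1],
                                    axis=1)
    ll = np.logaddexp.reduce(la[-1])
    return np.exp(la + lb - ll), ll

T, M = 6, 4
post, ll = forward_backward(np.log(np.random.rand(T, M)),
                            np.log(np.full((M, M), 1.0 / M)),
                            np.log(np.full(M, 1.0 / M)))
print(post.sum(axis=1))  # each row sums to 1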
Sampling a segmentation. A segmentation can be sampled by using the factorization
q(α_{1:T} | v_{1:T}) = q(α_T | v_{1:T}) ∏_{t=1}^{T−1} q(α_t | α_{t+1}, v_{1:T}), with

    q(α_t | α_{t+1}, v_{1:T}) = p(α_{t+1} | α_t) q(v_{t+1} | α_{t+1}) f_t^{α_t} / f_{t+1}^{α_{t+1}}.

Suppose that, at time t, c_t = 1 and we have sampled dynamics type s_t = i and duration d_t = k.
Then, α_{t−k+1:t−1} and c_{t−k} are determined by the model assumptions⁹, so that we effectively need
to sample s_{t−k}, d_{t−k} from the distribution q(s_{t−k}, d_{t−k}, c_{t−k} = 1 | α_{t−k+1}, v_{1:T}), which is given by

    λ_k π_{i,:} q(v_{t−k+1:t} | α_t) f_{t−k}^{:,:,1} / f_t^{i,k,1},

since q(v_{t−k+2:t} | α_{t−k+2:t}) f_{t−k+1}^{i,k,k} = f_t^{i,k,1}.
3.2 Maximum a Posteriori (MAP) Method
Instead of approximating the posterior distribution of all hidden variables, we can approximate only
p(h^{1:S}_{1:M} | v_{1:T}) with a deterministic distribution, by using the variational method described above in
which q(h^i_m) is a Dirac delta around its mean. Notice that this is equivalent to computing the most
likely set of hidden trajectories and parameters by maximizing the joint distribution p(v_{1:T}, h^{1:S}_{1:M} | θ)
with respect to h^{1:S}_{1:M} and θ using an EM algorithm.
3.3 Gibbs Sampling Method
In our stochastic EM approach with Gibbs sampling, the expectation of the complete-data log-likelihood L(θ) is approximated by L(θ) ≈ Σ_{n=1}^{N} log p(v_{1:T}, ẑ^n_{1:T}, α̂^n_{1:T}, ĥ^{1:S,n}_{1:M} | θ), where
ẑ^n_{1:T}, α̂^n_{1:T}, ĥ^{1:S,n}_{1:M} are samples drawn from p(h^{1:S}_{1:M}, z_{1:T}, α_{1:T} | v_{1:T}). Such samples can be obtained
by iterative drawing from the tractable conditionals

    p(z_{1:T}, α_{1:T} | h^{1:S}_{1:M}, v_{1:T}) and p(h^{1:S}_{1:M} | z_{1:T}, α_{1:T}, v_{1:T}).
⁹ The values of c_{1:T−1} are automatically determined if c_T and d_{1:T} are given.
Time-series 1
  Correct:     1 24 42 66 89
  Variational: 1 17 39 62 82 | 1 17 39 63 82 | 1 17 38 62 81 | 1 14 39 63 79
  MAP:         1 20 40 64 85 | 1 19 40 64 84 | 1 17 39 63 82 | 1 20 40 64 85
  Gibbs:       1 17 39 63 82 | 1 22 41 64 88 | 1 17 40 65 81 | 1 17 40 64 82
Time-series 2
  Correct:     1 23 46 63
  Variational: 1 23 46 64 | 1 18 42 63 | 1 18 42 63 | 1 22 45 63
  MAP:         1 23 46 62 | 1 23 46 62 | 1 23 46 62 | 1 23 46 63
  Gibbs:       1 18 36 56 | 1 20 42 60 | 1 9 23 47 63 71 | 1 21 47 62
Time-series 3
  Correct:     1 23 40 63
  Variational: 1 21 39 62 | 1 22 38 62 | 1 17 38 60 87 | 1 23 38 62
  MAP:         1 21 40 64 | 1 21 40 63 | 1 22 40 65 | 1 22 40 63
  Gibbs:       1 23 38 62 | 1 16 35 61 82 | 1 17 40 64 84 | 1 17 37 60 86
Time-series 4
  Correct:     1 22 47 68
  Variational: 1 23 47 68 | 1 22 47 66 | 1 9 23 31 60 66 | 1 9 23 31 46 67
  MAP:         1 22 47 69 | 1 22 47 68 | 1 22 47 68 | 1 15 20 45 67
  Gibbs:       1 14 24 38 63 | 1 14 24 38 63 | 1 9 22 32 47 68 | 1 9 23 31 52 74
Time-series 5
  Correct:     1 24 42 65 88 105
  Variational: 1 18 42 65 82 100 | 1 18 42 65 82 99 | 1 6 12 42 58 76 83 100 | 1 11 18 42 60 85 102
  MAP:         1 18 40 55 65 82 97 | 1 18 42 64 82 98 | 1 18 42 65 82 96 | 1 11 19 40 60 85 102
  Gibbs:       1 16 44 71 97 | 1 16 40 63 80 102 | 1 22 44 63 89 104 | 1 7 13 21 31 58 71 101 114

Table 2: Segmentations given by the variational, MAP and Gibbs sampling methods on 5 artificial time-series (four random initializations per method). The colors of the original table, which indicated the dynamics type of each action, are not reproduced here; only the action boundaries are shown.
In order to sample from p(z_{1:T}, α_{1:T} | h^{1:S}_{1:M}, v_{1:T}), we can first sample a segmentation from
p(α_{1:T} | h^{1:S}_{1:M}, v_{1:T}) employing the method described above (with q(·) replaced by p(· | h^{1:S}_{1:M}, v_{1:T})),
and then use a HMM forward-filtering backward-sampling method for sampling from
p(z_{1:T} | α_{1:T}, h^{1:S}_{1:M}, v_{1:T}). Finally, sampling from p(h^{1:S}_{1:M} | z_{1:T}, α_{1:T}, v_{1:T}) may be carried out using the forward-filtering backward-sampling procedure described in [8].
3.4 Comparison of the Approximation Methods
In this section, we compare the performance of the approximation methods presented above on 5
artificially generated time-series. Each time-series (with V=2 or V=3) contains repeated occurrences
of actions arising from the noisy transformation of up to three hidden trajectories with time-warping.
In the second row of Table 2, we give the correct segmentation for each time-series. Each number
indicates the time-step at which a new action starts, whilst the colors indicate the types of dynamics
underlying the actions. In the rows below, we give the segmentations obtained by each approximation method with 4 different initial random conditions (with minimum and maximum action duration
between 5 and 30).
From the results, we can deduce that Gibbs sampling performs considerably worse than the deterministic approaches. Between the variational and MAP methods, the latter is preferable and gives
a good solution in most cases. The poor performance of Gibbs sampling can be explained by the
fact that this method cannot deal well with high correlation between h^{1:S}_{1:M} and α_{1:T}, z_{1:T}. The
continuous hidden variables are sampled given a single set of segmentation and time-warping variables (unlike update (1) in which we average over segmentation and time-warping variables), which
may result in poor mixing. The inferior performance of the variational method in comparison to
the MAP method would seem to suggest that the posterior covariances of the continuous hidden
variables cannot accurately be estimated.
4
Table Tennis Recordings using a Robot Arm
In this section, we show how the proposed model performs in segmenting time-series obtained
from table tennis recordings using a robot arm moved by a human. The generic goal is to extract
movement templates to be used for robot imitation learning [2, 9]. Here, kinesthetic teach-in can be
advantageous in order to avoid the correspondence problem.
We used the Barrett WAM robot arm shown in Figure 2 as a haptic input device for recording
and replaying movements. We recorded a game of table tennis where a human moved the robot
arm making the typical moves occurring in this specific setup. These naturally include forehands,
going into an awaiting posture for a forehand, backhands, and going into an awaiting posture for
a backhand. They also include smashes, however, due to the inertia of the robot, they are hard to
perform and only occur using the forehand.
[Figure 3 appears here: joint positions (top) and velocities (bottom) over time, with dashed action boundaries and numeric template labels.]
Figure 3: This figure shows the first three degrees of freedom (known as flexion-extension,
adduction-abduction and humerus rotation) of a robot arm when used by a human as a haptic input device playing table tennis. The upper graph shows the joint positions while the lower one
shows the joint velocities. The dashed vertical lines indicate the obtained action boundaries and the
numbers the underlying movement templates. This sequence includes moves to the right awaiting
posture (1), moves to the left awaiting posture (2), forehands (3, 5), two incomplete moves towards
the awaiting posture merged with a backhand (4), moves to the left awaiting posture with humerus
rotation (6) and backhands (7).
The recorded time-series contains the joint positions and velocities of all seven degrees of freedom (DoF)
of an anthropomorphic arm. However, only the shoulder and upper arm DoF, which are the most significant in such movements, were considered for the analysis. The 1.5-minute-long recording was subsampled at 5 samples per second. The minimum and maximum durations d_min and d_max were set to
4 and 15 respectively, as prior knowledge about table tennis would suggest that basic-action durations
are within this range. We also imposed the constraint that nearly complete movements are observed
(Δ = 2, Γ = M − 1). The length of the hidden dynamics M was set to d_max, the variable w_max was
set to¹⁰ 4, and the number of movement templates S was set to 8, as this should be a reasonable upper
bound on the number of different underlying movements. Given the results obtained in the previous
section, we used the MAP approximation method.

[Figure 2 appears here.] Figure 2: The Barrett WAM used for recording the table tennis sequences. During the experiment the robot is in gravity compensation and sequences can be replayed on the real system.
As shown in Figure 3, the model segments the time-series into 59 basic movements of forehands
(numbers 3, 5), backhands (7), and going into a right (1) and left (2, 6) awaiting posture. In some
cases, a more fluid game results in incomplete moves towards an awaiting posture and hence into
a composite movement that can no longer be segmented (4). Also, there appear to be two types
of moving back to the left awaiting posture: one which needs untwisting of the humerus rotation
degree of freedom (6), and another which purely employs shoulder degrees of freedom (2).
The action boundaries estimated by the model are in strong agreement with manual visual segmentation, with the exception of movements 4 that should be segmented into two separate movements.
At the web-page http://silviac.yolasite.com we provide a visual interpretation of the segmentation
from which the model accuracy can be appreciated.
¹⁰ This is the smallest value that ensures that complete actions can be observed.
5 Conclusions
In this paper we have introduced a probabilistic model for detecting repeated occurrences of basic movements in time-series data. This model may potentially be applicable in domains such
as robotics, sports sciences, physical therapy, virtual reality, artificial movie generation, computer
games, etc., for automatic extraction of the movement templates contained in a recording. We have
presented an evaluation on table tennis movements that we have recorded using a robot arm as haptic input device, showing that the model is able to accurately segment the time-series into basic
movements that could be used for robot imitation learning.
Appendix
Constraints on z_{1:T}

Consider an action starting at time 1 and finishing at time t with the constraints z_1 ∈ {1, . . . , Δ} and
z_t ∈ {Γ, . . . , M}. Suppose that z_τ = m for τ ∈ {1, . . . , t − 1}. Then it must be that

1. m ∈ {max[τ, Γ − (t − τ)w_max], . . . , min[Δ + (τ − 1)w_max, M − (t − τ)]}.
2. z_{τ+1} ∈ {max[m + 1, Γ − (t − τ − 1)w_max], . . . , min[m + w_max, M − (t − τ − 1)]}.

Therefore, we need to modify the original priors ω, ω̄ with time-dependent priors with zero values
outside the appropriate range.
References
[1] R. Boulic, B. Ulicny, and D. Thalmann. Versatile walk engine. Journal of Game Development, 1(1):29–52, 2004.
[2] S. Calinon, F. Guenter, and A. Billard. On learning, representing and generalizing a task in a humanoid robot. IEEE Transactions on Systems, Man and Cybernetics, Part B, 37(2):286–298, 2007.
[3] G. Celeux and J. Diebolt. The SEM algorithm: A probabilistic teacher algorithm derived from the EM algorithm for the mixture problem. Computational Statistics Quarterly, 2:73–82, 1985.
[4] S. Chiappa. Hidden Markov switching models with explicit regime-duration distribution. Under submission.
[5] S. Chiappa, J. Kober, and J. Peters. Using Bayesian dynamical systems for motion template libraries. In Advances in NIPS 21, pages 297–304, 2009.
[6] A. Coates, P. Abbeel, and A. Y. Ng. Learning for control from multiple demonstrations. In Proceedings of ICML, pages 144–151, 2008.
[7] J. Durbin and S. J. Koopman. Time Series Analysis by State Space Methods. Oxford Univ. Press, 2001.
[8] S. Frühwirth-Schnatter. Data augmentation and dynamic linear models. Journal of Time-Series Analysis, 15:183–202, 1994.
[9] A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. In Advances in NIPS 15, pages 1547–1554, 2003.
[10] U. Kersting, P. McAlpine, B. Rosenhahn, H. Seidel, and R. Klette. Marker-less human motion tracking opportunities for field testing in sports. Journal of Biomechanics, 39:S191–S191, 2006.
[11] J. Listgarten, R. M. Neal, S. T. Roweis, and A. Emili. Multiple alignment of continuous time series. In Advances in NIPS 17, pages 817–824, 2005.
[12] J. Listgarten, R. M. Neal, S. T. Roweis, R. Puckrin, and S. Cutler. Bayesian detection of infrequent differences in sets of time series with shared structure. In Advances in NIPS 19, pages 905–912, 2007.
[13] R. McDonnell, S. Jörg, J. K. Hodgins, F. N. Newell, and C. O'Sullivan. Evaluating the effect of motion and body shape on the perceived sex of virtual characters. ACM Transactions on Applied Perception, 5(4), 2009.
[14] W. Pan and L. Torresani. Unsupervised hierarchical modeling of locomotion styles. In Proceedings of ICML, 2009.
[15] M. Peinado, D. Maupu, D. Raunhardt, D. Meziat, D. Thalmann, and R. Boulic. Full-body avatar control with environment awareness. IEEE Computer Graphics and Applications, 29(3), 2009.
[16] W. Takano, K. Yamane, and Y. Nakamura. Capture database through symbolization, recognition and generation of motion patterns. In Proceedings of ICRA, pages 3092–3097, 2007.
[17] B. Williams, M. Toussaint, and A. Storkey. Modelling motion primitives and their timing in biologically executed movements. In Advances in NIPS 20, pages 1609–1616, 2008.
[18] K. Yamane and J. K. Hodgins. Simultaneous tracking and balancing of humanoid robots for imitating human motion capture data. In Proceedings of IROS, pages 2510–2517, 2009.
3,434 | 411 | Simulation of the Neocognitron on a CCD
Parallel Processing Architecture
Michael L. Chuang and Alice M. Chiang
M.I.T Lincoln Laboratory
Lexington, MA 02173
e-mail: [email protected]
Abstract
The neocognitron is a neural network for pattern recognition and feature
extraction. An analog CCD parallel processing architecture developed
at Lincoln Laboratory is particularly well suited to the computational requirements of shared-weight networks such as the neocognitron, and implementation of the neocognitron using the CCD architecture was simulated.
A modification to the neocognitron training procedure, which improves
network performance under the limited arithmetic precision that would be
imposed by the CCD architecture, is presented.
1 INTRODUCTION
Multilayer neural networks characterized by local interlayer connectivity and groups
of nodes that are constrained to have the same weights on their input lines are often
referred to as shared-weight networks. A group of nodes with identical weights where
each node is connected to a different portion of the layer immediately beneath can
be thought of as a collection of spatially replicated receptive fields. Among the
desirable attributes of shared-weight networks is the fact that substantially less
storage is required for weights than would be required by a more conventional network with a comparable number of nodes. Furthermore, reducing the number of
free parameters through use of shared weights and local receptive fields, as opposed to simply reducing the number of hidden nodes, may be an effective way of
obtaining good generalization when only a small training set is available (Martin
and Pittman, 1989). However, the most immediately obvious attribute of a sharedweight architecture is that the replicated receptive fields allow a learned feature
to be detected anywhere within the input. This feature is particularly useful in
1039
1040
Chuang and Chiang
tasks where position invariance is required (Le Cun, 1989). Neural networks using
shared weights have been applied successfully to areas ranging from handwritten
digit recognition (Le Cun, Boser, et. al., 1989) to phoneme extraction in speech
preprocessing (Waibel, et. al., 1989).
A CCD architecture that is well suited to implementing shared-weight networks has
been developed at Lincoln Laboratory (Chiang and LaFranchise, 1991). This architecture performs high-speed inner prod uct computations and is able to accommodate the often complicated data access patterns of a shared-weight network without
imposing the burden of this complexity on the host computer; input and output
to devices built using this architecture are simple. The neocognitron (Fukushima,
1988) was selected as a candidate for implementation by the CCD architecture.
In particular, we were interested the effect that limited precision arithmetic might
have on network performance.
2 THE NEOCOGNITRON
The neocognitron is a multilayer feed-forward neural network for pattern recognition. The nodes or cells in each layer or level of the neocognitron are further
subdivided into cell planes, where all the nodes in a given cell plane are feature
detectors tuned to the same feature but connected to a different portion of the level
immediately beneath (the first level has cell planes connected directly to the input).
Each cell plane can be viewed as an array of identical, overlapping receptive fields.
Three types of processing elements or nodes are used in the neocognitron. S-cells
perform feature extraction, c-cells compensate for local shifts of features, and v-cells
are intended to prevent random excitation of s-cells. A given cell plane contains
only one type of node. A cell plane containing only s-cells, for example, is thus
called an s-plane. Each level of the network contains several s-planes, an identical
number of c-planes, and exactly one v-plane. The function of an s-cell is to generate
a nonlinear function of the inner product of a stored weight template $a_\lambda(k, \kappa, i, j)$
and the contents of its receptive field. (In this notation $\lambda$ denotes the level of
the s-plane with which the template is associated, and the $k$ and $\kappa$ indicate the
particular s- and c-planes between which the template serves as a connection. The
i, j are spatial coordinates within the template.) An s-plane is therefore a feature
map of its input. Each c-plane is paired with a single s-plane of the same level. A c-cell
has a small receptive field on its corresponding s-plane and performs a weighted
form of local feature-shift invariance, and a c-plane is a feature map of its input
which is unchanged by small translations of features in the input. A schematic of a
three-level neocognitron is shown in Figure 1.
The cell planes in the first level of the network typically correspond to maps of simple
features such as oriented line segments. The second level of the neocognitron is given
the output of the first-level c-planes as input, and tends to form more complicated
features from the first-level cell planes. Successively higher levels correspond to even
more complex features; at the top level, each c-cell (of which there is exactly one in
each top-level c-plane) corresponds to one input pattern in a trained neocognitron.
The basic idea is to break up each input pattern into simple components such as
line segments and corners, then to put the pieces back together, allowing a certain
amount of relative position shift between the pieces at each stage of reassembly.
This allows the network to identify deformed or shifted inputs. The extent to which
a particular network is able to tolerate deformation of input patterns depends on
the amount of overlap between adjacent receptive fields as well as the size and
weighting of c-cell receptive fields.
The output of an s-cell is given by
$$ s_\lambda(k,m,n) = \varphi\left[ \frac{a_0 + \sum_{\kappa}\sum_{i,j} a_\lambda(k,\kappa,i,j)\, c_{\lambda-1}(\kappa,\, m+i-1,\, n+j-1)}{a_0 + \frac{r_\lambda}{1+r_\lambda}\, b_\lambda(k)\, v_\lambda(m,n)} - 1 \right], \quad \varphi(x) = \begin{cases} x, & x \ge 0 \\ 0, & x < 0 \end{cases} $$
and c-cells compute
$$ c_\lambda(k,m,n) = \psi\left[ \sum_{i,j} d_\lambda(i,j)\, s_\lambda(k,\, m+i-1,\, n+j-1) \right], \quad \psi(x) = \begin{cases} x/(1+x), & x \ge 0 \\ 0, & x < 0 \end{cases} $$
Figure 1: Schematic of a Three-Level Neocognitron
The majority of the computation in the neocognitron consists of the inner products.
A good implementation of shared-weight networks such as the neocognitron must be
capable of performing high speed inner product computations as well as supporting
the data access patterns of the algorithm efficiently. A device which meets both
these requirements is described in the following section.
3 THE IMAGE FEATURE EXTRACTOR
The neocognitron is most easily visualized as a three-dimensional structure built of
the s-, c- and v-cells, but the s- and c-planes can be generated by raster scanning
weight templates whose values are the $a_\lambda(k, \kappa, i, j)$ or the $d_\lambda(i, j)$, respectively, over
the appropriate input. This operation can be performed efficiently by the CCD
architecture alluded to in the Introduction. In this architecture, analog node values
are represented using charge packets while fully programmable weight values are
stored digitally on-chip. The multiplications of the generic weighted sum computation are performed in parallel, with the summation performed in the charge domain,
yielding a complete inner product sum on each clock.
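To make the raster-scanning view concrete, the following is a minimal NumPy sketch of the s-cell and c-cell responses defined above. All sizes and names are illustrative assumptions, not from the paper: `a` plays the role of $a_\lambda(k,\kappa,\cdot,\cdot)$, `b` of $b_\lambda(k)$, `d` of $d_\lambda$, and the selectivity `r` and offset `a0` are assumed constants.

```python
import numpy as np

def phi(x):            # phi(x) = x for x >= 0, else 0
    return np.maximum(x, 0.0)

def psi(x):            # psi(x) = x/(1+x) for x >= 0, else 0
    x = np.maximum(x, 0.0)
    return x / (1.0 + x)

def s_plane(c_prev, a, b, v, r=4.0, a0=1.0):
    """Response of one s-plane by raster scanning its template.
    c_prev: (K, H, W) c-planes of the previous level
    a:      (K, h, w) excitatory template for this s-plane
    b:      scalar inhibitory weight; v: (H-h+1, W-w+1) v-cell plane"""
    K, H, W = c_prev.shape
    _, h, w = a.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for m in range(out.shape[0]):          # scan of receptive fields
        for n in range(out.shape[1]):
            patch = c_prev[:, m:m + h, n:n + w]
            e = a0 + np.sum(a * patch)                 # excitation
            i = a0 + (r / (1 + r)) * b * v[m, n]       # inhibition
            out[m, n] = phi(e / i - 1.0)
    return out

def c_plane(s, d):
    """Blurring c-plane: weighted average of s-cells under template d."""
    h, w = d.shape
    out = np.zeros((s.shape[0] - h + 1, s.shape[1] - w + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = psi(np.sum(d * s[m:m + h, n:n + w]))
    return out
```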
An image feature extractor (IFE) device suitable for performing the inner products
required by a neural network with local receptive fields and shared weights has
been fabricated (Chiang and LaFranchise, 1991). The IFE consists of a 775-stage
CCD tapped delay line for holding and shifting input pixels or node values, 49
eight-bit, four-quadrant multiplying digital-to-analog converters (MDACs), and on-chip storage for 980 eight-bit digital weights. Figure 2 is a photomicrograph of
the chip, which has an area of 29 mm2 and performs over one billion arithmetic
operations/second when clocked at 10 MHz. The device dissipates less than 1 W.
The 49 MDACs of the IFE are arranged in a 7 x 7 array; each MDAC nondestructively senses the value held in an appropriate point along the 775-stage tapped delay
line, which holds six 128-pixel lines, plus seven pixels of the following line, of the
input image. Image pixels are continuously loaded into the device in row-by-row
fashion. Each MDAC has a local memory of twenty eight-bit digital weights for
holding inner product kernel or template values. Conceptually, the device scans a
7 x 7 "window" over an input array, shifting one position at each step, and computes
the inner product of each of the twenty templates with the portion of the image
beneath the window. The multiplications of each inner product are performed in
parallel and the partial sums are connected to a common output line, allowing the
complete inner product to be computed in one clock. In actuality, the device passes
the input image under the 7 x 7 window, performing twenty inner products with
each shift of the image. A schematic of data flow through the IFE device is shown
in Figure 3.
Figure 2: Photomicrograph of the CCD Image Feature Extractor
Figure 3: Dataflow in the Image Feature Extractor
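In software, the dataflow of Figure 3 amounts to a sliding 7 x 7 window with twenty inner products per one-pixel shift. The sketch below only emulates the device's arithmetic, not its charge-domain implementation; the function name and the batching of the twenty templates into a single array are our own illustrative choices.

```python
import numpy as np

def ife_feature_maps(image, templates):
    """Emulate the IFE: at every one-pixel shift of a 7x7 window over the
    image, form the inner product with every stored template.
    image:     (H, 128) array of pixel values
    templates: (20, 7, 7) array of eight-bit weights (stored on-chip)
    returns:   (20, H-6, 122) array of feature maps"""
    H, W = image.shape
    T, h, w = templates.shape
    maps = np.zeros((T, H - h + 1, W - w + 1))
    for m in range(H - h + 1):
        for n in range(W - w + 1):            # one clock per shift
            window = image[m:m + h, n:n + w]  # pixels sensed by the 49 MDACs
            # all 49 multiplies happen in parallel; the summation is done
            # in the charge domain, one complete inner product per clock
            maps[:, m, n] = np.tensordot(templates, window,
                                         axes=([1, 2], [0, 1]))
    return maps

image = np.random.rand(128, 128)
weights = np.random.randint(-128, 128, size=(20, 7, 7)).astype(float)
features = ife_feature_maps(image, weights)   # shape (20, 122, 122)
```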
Figure 4: (a) Effect of Arithmetic Precision on Classification. (b) Comparison of
Original and Modified Training Procedures. (Both panels plot percentages of test
inputs against arithmetic precision in bits, with floating point as the reference.)
4 A MODIFIED TRAINING ALGORITHM
Most computer simulations of the neocognitron have used floating point arithmetic
as well as weights which are, for all practical purposes, real numbers. However,
a neocognitron implemented using an IFE device would use fairly low precision
arithmetic and quantized weights. In order to determine whether the neocognitron
would continue to perform under such restrictions, a software simulation of neocognitrons using low precision arithmetic was implemented. Weights were taken from
a network that was previously trained using floating point arithmetic and quantized
to a number of bits equal to the arithmetic precision. As can be seen from Figure
4(a), the fraction of inputs correctly identified (bottom curve) from a test set of
handwritten letters decreases substantially as arithmetic precision is reduced. Although the error rate (top curve) remains approximately constant, lower arithmetic
precision tends to increase the number of rejections.
4.1 EFFECT OF LIMITED PRECISION
Inspection of the weights revealed that the range of weights from previously trained
nets was too large to be represented using the number of bits available. Either small
weights were set to zero, large weights were clipped, or both. Networks trained using
low precision arithmetic tended to group all input patterns into a single category.
This can again be attributed to the restricted range of possible weight values. The
neocognitron training algorithm consists of assigning small random initial values
to weights and presenting training inputs. The connection weights that produce
strong responses are increased according to
$$ a_\lambda^{\gamma+1}(k,\kappa,i,j) = a_\lambda^{\gamma}(k,\kappa,i,j) + \delta a_\lambda(k,\kappa,i,j), \qquad \delta a_\lambda(k,\kappa,i,j) = q_\lambda\, c_{\lambda-1}(\kappa,\, m_\gamma+i-1,\, n_\gamma+j-1) \ge 0 $$
$$ b_\lambda^{\gamma+1}(k) = b_\lambda^{\gamma}(k) + \delta b_\lambda(k), \qquad \delta b_\lambda(k) = q_\lambda\, v_\lambda(m_\gamma, n_\gamma) \ge 0 $$
where "y is an update index. Restricted to a fairly small range of numbers, weights
could not be increased to the point where the contribution of the cell planes whose
initial random weights were unchanged became negligible. Those initial weights
that were not updated contribute random features to the recognition process; the
effect is that of adding noise.
4.2 WEIGHT NORMALIZATION
In order to reduce the effects of clipping on the quantized weights, the weight update
algorithm was modified. As can be seen from the weight update equations, the standard training procedure allows the $a_\lambda(k, \kappa, i, j)$ values to grow without bound. The
inner product of the weights and the input is normalized implicitly when computing
the s-cell output. Rather than using the available numerical range so lavishly, the
algorithm was modified to normalize the $a_\lambda(k, \kappa, i, j)$ templates explicitly during
training after they reached a prespecified bound. The reduction in classification
performance as computational precision decreases is compared between neocognitrons trained using the modified algorithm and networks trained using the original
algorithm in Figure 4(b). Networks trained using the modified algorithm have
somewhat higher (less than 5 percent) rejection and error rates compared to original networks when using floating point arithmetic, but demonstrate significantly
better performance when computational precision is limited to eight bits or less.
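A toy sketch of the quantized, normalized update is given below. The reinforcement step follows the update equations above; the explicit rescaling is the modification described in this section. The learning rate, the bound and the fixed-point quantizer are illustrative assumptions, not values from the paper.

```python
import numpy as np

def quantize(w, bits=8, w_max=1.0):
    """Round weights to a signed fixed-point grid with 2**bits levels."""
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(w / w_max * levels), -levels, levels) * w_max / levels

def reinforce(a, b, c_patch, v_val, q=16.0, bound=1.0, bits=8):
    """One training update for a winning s-cell.
    a: excitatory template, b: inhibitory weight,
    c_patch: c-cell activities under the cell's receptive field,
    v_val: v-cell activity at the cell's position."""
    a = a + q * c_patch          # delta-a = q * c_{lambda-1}(...) >= 0
    b = b + q * v_val            # delta-b = q * v_lambda(...)     >= 0
    if np.abs(a).max() > bound:  # modified step: explicit normalization,
        a = a * (bound / np.abs(a).max())  # so quantization no longer clips
    return quantize(a, bits, bound), b
```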
5 SUMMARY
We have presented a CCD architecture that is well matched to the computational
requirements of shared-weight neural networks with local connectivity. The implementation of the neocognitron, a shared-weight network for pattern recognition and
feature extraction, was simulated and a new training procedure that significantly
improves classification when limited precision arithmetic is used, is presented.
Acknowledgements
This work was supp orted by the Office of Naval Resarch, DARPA, and the Department of the Air Force.
References
A. M. Chiang and J. R. LaFranchise, "A Programmable Image Processor," to appear
in the ISSCC Digest of Technical Papers 1991.
M. L. Chuang, A Study of the Neocognitron Pattern Recognition Algorithm. Master's
Thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering and
Computer Science, Cambridge, MA, June 1990.
K. Fukushima, "A Neural Network for Visual Pattern Recognition," IEEE Computer, vol. 21, no. 3. pp. 65-75, March, 1988.
Y. Le Cun, "Generalization and Network Design Strategies," Technical Report CRG-TR-89-4, Department of Computer Science, University of Toronto, 1989.
Y. Le Cun, B. Boser, J. Denker, J. Henderson, D. Howard, R. Hubbard, and
L. Jackel, "Handwritten Digit Recognition with a Back-Propagation Network," in
D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, pp.
396-404, San Mateo, CA: Morgan Kaufmann, 1989.
G. 1. Martin and J. A. Pittman, "Recognizing Hand-Printed Letters and Digits,"
in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, pp.
405-414, San Mateo, CA: Morgan Kaufmann, 1989.
A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, "Phoneme Recognition Using Time-Delay Neural Networks," IEEE Trans. on Acoustics, Speech and
Signal Processing, vol. 37, no. 3, pp. 329-339, March 1989.
| 411 |@word deformed:1 simulation:6 accommodate:1 reduction:1 initial:3 contains:2 tuned:1 lang:1 assigning:1 must:1 numerical:1 update:3 selected:1 device:9 plane:24 inspection:1 prespecified:1 chiang:7 quantized:3 contribute:1 node:11 toronto:1 along:1 consists:3 isscc:1 interlayer:1 td:1 window:3 notation:1 matched:1 substantially:2 developed:2 lexington:1 ife:5 fabricated:1 charge:2 exactly:2 appear:1 t1:1 negligible:1 engineering:1 local:7 tends:2 meet:1 approximately:1 onchip:1 might:1 plus:1 mateo:2 alice:1 co:1 limited:5 range:4 bi:2 practical:1 implement:1 digit:3 procedure:4 area:2 thought:1 significantly:2 printed:1 quadrant:1 storage:2 put:1 restriction:1 conventional:1 imposed:1 map:3 immediately:3 array:3 coordinate:1 updated:1 tapped:2 element:1 hose:1 recognition:8 particularly:2 bottom:1 electrical:1 connected:5 decrease:2 digitally:1 complexity:1 trained:7 uets:1 segment:2 easily:1 darpa:1 chip:2 iit:1 represented:2 effective:1 detected:1 whose:2 hanazawa:1 net:1 product:10 beneath:3 lincoln:3 normalize:1 billion:1 requirement:3 produce:1 strong:1 implemented:2 indicate:1 arl:1 attribute:2 packet:1 implementing:1 subdivided:1 generalization:2 summation:1 hold:1 purpose:1 jackel:1 hubbard:1 successfully:1 weighted:2 mit:1 modified:7 rather:1 office:1 june:1 naval:1 el:1 typically:1 bt:1 hidden:1 interested:1 pixel:5 among:1 classification:3 constrained:1 spatial:1 fairly:2 field:9 equal:1 extraction:4 identical:3 mm2:1 report:1 micro:1 oriented:1 floating:3 intended:1 fukushima:2 henderson:1 yielding:1 sens:1 held:1 capable:1 partial:1 deformation:1 increased:2 ar:1 mhz:1 clipping:1 delay:3 recognizing:1 too:1 stored:2 scanning:1 michael:1 together:1 continuously:1 connectivity:2 again:1 thesis:1 successively:1 opposed:1 pittman:2 containing:1 corner:1 compu:1 li:1 supp:1 mdac:1 explicitly:1 depends:1 vi:1 piece:2 performed:4 break:1 portion:3 reached:1 parallel:7 complicated:2 contribution:1 air:1 became:1 phoneme:2 loaded:1 efficiently:2 kaufmann:2 correspond:2 identify:1 conceptually:1 handwritten:3 multiplying:1 processor:1 detector:1 tended:1 touretzky:2 ed:2 raster:1 pp:4 obvious:1 associated:1 attributed:1 dataflow:1 massachusetts:1 improves:2 back:2 feed:1 higher:2 tolerate:1 response:1 arranged:1 furthermore:1 anywhere:1 uct:1 stage:3 rejected:1 clock:2 hand:1 nonlinear:1 overlapping:1 propagation:1 effect:5 normalized:1 spatially:1 laboratory:3 adjacent:1 ll:2 during:1 excitation:1 clocked:1 neocognitron:24 presenting:1 complete:2 demonstrate:1 performs:3 percent:1 ranging:1 image:10 common:1 mt:1 ji:1 analog:3 cambridge:1 imposing:1 access:2 certain:1 continue:1 seen:2 morgan:2 somewhat:1 determine:1 signal:1 arithmetic:14 ii:1 ico:1 desirable:1 technical:2 characterized:1 compensate:1 host:1 paired:1 schematic:3 basic:1 multilayer:2 kernel:1 normalization:1 cell:23 grow:1 pass:1 flow:1 revealed:1 iii:2 architecture:15 converter:1 identified:1 inner:12 idea:1 reduce:1 br:1 shift:4 actuality:1 six:1 whether:1 speech:2 programmable:2 useful:1 amount:2 visualized:1 category:1 reduced:1 generate:1 shifted:1 correctly:1 vol:2 group:3 four:1 photomicrograph:2 prevent:1 fraction:1 sum:3 letter:2 master:1 clipped:1 comparable:1 bit:7 layer:2 bound:2 comparision:1 software:1 lafranchise:3 speed:2 performing:3 martin:2 department:2 according:1 waibel:2 march:2 cun:4 modification:1 refered:1 restricted:2 taken:1 alluded:1 equation:1 previously:2 remains:1 serf:1 available:3 operation:2 eight:4 denker:1 appropriate:2 generic:1 original:3 chuang:6 denotes:1 top:3 ccd:13 
ting:1 unchanged:2 digest:1 receptive:9 strategy:1 simulated:2 majority:1 seven:1 mail:1 extent:1 index:1 ql:1 holding:2 implementation:4 design:1 twenty:3 perform:2 allowing:2 recogniton:1 howard:1 supporting:1 hinton:1 required:4 connection:2 acoustic:1 learned:1 boser:2 trans:1 able:2 pattern:11 mdacs:2 built:2 memory:1 shifting:2 suitable:1 overlap:1 force:1 dissipates:1 technology:1 acknowledgement:1 multiplication:2 relative:1 fully:1 digital:3 translation:1 row:2 summary:1 free:1 allow:1 institute:1 template:8 curve:2 computes:1 forward:1 collection:1 replicated:2 preprocessing:1 san:2 implicitly:1 shikano:1 prod:2 ca:2 obtaining:1 complex:1 domain:1 noise:1 crgtr:1 fashion:1 precision:14 position:3 candidate:1 weighting:1 extractor:4 burden:1 adding:1 rejection:2 suited:2 simply:1 visual:1 corresponds:1 ma:2 viewed:1 shared:11 content:1 reducing:2 called:1 invariance:2 scan:1 dept:1 |
3,435 | 4,110 | An Inverse Power Method for Nonlinear
Eigenproblems with Applications in
1-Spectral Clustering and Sparse PCA
Matthias Hein
Thomas Bühler
Saarland University, Saarbrücken, Germany
{hein,tb}@cs.uni-saarland.de
Abstract
Many problems in machine learning and statistics can be formulated as (generalized) eigenproblems. In terms of the associated optimization problem, computing linear eigenvectors amounts to finding critical points of a quadratic function
subject to quadratic constraints. In this paper we show that a certain class of constrained optimization problems with nonquadratic objective and constraints can be
understood as nonlinear eigenproblems. We derive a generalization of the inverse
power method which is guaranteed to converge to a nonlinear eigenvector. We
apply the inverse power method to 1-spectral clustering and sparse PCA which
can naturally be formulated as nonlinear eigenproblems. In both applications we
achieve state-of-the-art results in terms of solution quality and runtime. Moving
beyond the standard eigenproblem should be useful also in many other applications and our inverse power method can be easily adapted to new problems.
1 Introduction
Eigenvalue problems associated to a symmetric and positive semi-definite matrix are quite abundant
in machine learning and statistics. However, considering the eigenproblem from a variational point
of view using Courant-Fischer-theory, the objective is a ratio of quadratic functions, which is quite
restrictive from a modeling perspective. We show in this paper that using a ratio of p-homogeneous
functions leads quite naturally to a nonlinear eigenvalue problem, associated to a certain nonlinear operator. Clearly, such a generalization is only interesting if certain properties of the standard
problem are preserved and efficient algorithms for the computation of nonlinear eigenvectors are
available. In this paper we present an efficient generalization of the inverse power method (IPM)
to nonlinear eigenvalue problems and study the relation to the standard problem. While our IPM
is a general purpose method, we show for two unsupervised learning problems that it can be easily
adapted to a particular application.
The first application is spectral clustering [20]. In prior work [5] we proposed p-spectral clustering
based on the graph p-Laplacian, a nonlinear operator on graphs which reduces to the standard graph
Laplacian for p = 2. For p close to one, we obtained much better cuts than standard spectral clustering, at the cost of higher runtime. Using the new IPM, we efficiently compute eigenvectors of the
1-Laplacian for 1-spectral clustering. Similar to the recent work of [19], we improve considerably
compared to [5] both in terms of runtime and the achieved Cheeger cuts. However, opposed to the
suggested method in [19] our IPM is guaranteed to converge to an eigenvector of the 1-Laplacian.
The second application is sparse Principal Component Analysis (PCA). The motivation for sparse
PCA is that the largest PCA component is difficult to interpret as usually all components are nonzero.
In order to allow a direct interpretation it is therefore desirable to have only a few features with
nonzero components but which still explain most of the variance. This kind of trade-off has been
widely studied in recent years, see [15] and references therein. We show that also sparse PCA has a
natural formulation as a nonlinear eigenvalue problem and can be efficiently solved with the IPM.
All proofs had to be omitted due to space restrictions and can be found in the supplementary material.
2 Nonlinear Eigenproblems
The standard eigenproblem for a symmetric matrix $A \in \mathbb{R}^{n \times n}$ is of the form
$$Af - \lambda f = 0, \qquad (1)$$
where $f \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$. It is a well-known result from linear algebra that for symmetric matrices
A, the eigenvectors of A can be characterized as critical points of the functional
$$F_{\mathrm{Standard}}(f) = \frac{\langle f, Af \rangle}{\|f\|_2^2}. \qquad (2)$$
The eigenvectors of A can be computed using the Courant-Fischer Min-Max principle. While the
ratio of quadratic functions is useful in several applications, it is a severe modeling restriction. This
restriction however can be overcome using nonlinear eigenproblems. In this paper we consider
functionals F of the form
$$F(f) = \frac{R(f)}{S(f)}, \qquad (3)$$
where with $\mathbb{R}_+ = \{x \in \mathbb{R} \mid x \ge 0\}$ we assume $R : \mathbb{R}^n \to \mathbb{R}_+$, $S : \mathbb{R}^n \to \mathbb{R}_+$ to be convex,
Lipschitz continuous, even and positively p-homogeneous¹ with $p \ge 1$. Moreover, we assume that
S(f) = 0 if and only if f = 0. The condition that R and S are p-homogeneous and even will imply
for any eigenvector v that also $\alpha v$ for $\alpha \in \mathbb{R}$ is an eigenvector. It is easy to see that the functional of
the standard eigenvalue problem in Equation (2) is a special case of the general functional in (3).
To gain some intuition, let us first consider the case where R and S are differentiable. Then it holds
for every critical point $f^*$ of F,
$$\nabla F(f^*) = 0 \quad \Longleftrightarrow \quad \nabla R(f^*) - \frac{R(f^*)}{S(f^*)}\, \nabla S(f^*) = 0.$$
Let $r, s : \mathbb{R}^n \to \mathbb{R}^n$ be the operators defined as $r(f) = \nabla R(f)$, $s(f) = \nabla S(f)$ and $\lambda^* = \frac{R(f^*)}{S(f^*)}$;
we see that every critical point $f^*$ of F satisfies the nonlinear eigenproblem
$$r(f^*) - \lambda^* s(f^*) = 0, \qquad (4)$$
which is in general a system of nonlinear equations, as r and s are nonlinear operators. If R and S
are both quadratic, r and s are linear operators and one gets back the standard eigenproblem (1).
Before we proceed to the general nondifferentiable case, we have to introduce some important concepts from nonsmooth analysis. Note that F is in general nonconvex and nondifferentiable. In the
following we denote by $\partial F(f)$ the generalized gradient of F at f according to Clarke [9],
$$\partial F(f) = \{\xi \in \mathbb{R}^n \mid F^0(f, v) \ge \langle \xi, v \rangle \ \text{for all } v \in \mathbb{R}^n\},$$
where $F^0(f, v) = \limsup_{g \to f,\ t \downarrow 0} \frac{F(g + tv) - F(g)}{t}$. In the case where F is convex, $\partial F$ is the subdifferential of F and $F^0(f, v)$ the directional derivative for each $v \in \mathbb{R}^n$. A characterization of critical
points of nonsmooth functionals is as follows.
points of nonsmooth functionals is as follows.
Definition 2.1 ([7]) A point $f \in \mathbb{R}^n$ is a critical point of F, if $0 \in \partial F$.
This generalizes the well-known fact that the gradient of a differentiable function vanishes at a
critical point. We now show that the nonlinear eigenproblem (4) is a necessary condition for a
critical point and in some cases even sufficient. A useful tool is the generalized Euler identity.
Theorem 2.1 ([21]) Let R : Rn ? R be a positively p-homogeneous and convex continuous function. Then, for each x ? Rn and r? ? ?R(x) it holds that hx, r? i = p R(x).
¹A function $G : \mathbb{R}^n \to \mathbb{R}$ is positively homogeneous of degree p if $G(\lambda x) = \lambda^p G(x)$ for all $\lambda \ge 0$.
The next theorem characterizes the relation between nonlinear eigenvectors and critical points of F .
Theorem 2.2 Suppose that R, S fulfill the stated conditions. Then a necessary condition for $f^*$
being a critical point of F is
$$0 \in \partial R(f^*) - \lambda^* \partial S(f^*), \quad \text{where} \quad \lambda^* = \frac{R(f^*)}{S(f^*)}. \qquad (5)$$
If S is continuously differentiable at $f^*$, then this is also sufficient.
Finally, the definition of the associated nonlinear operators in the nonsmooth case is a bit tricky as
r and s can be set-valued. However, as we assume R and S to be Lipschitz, the set where R and S
are nondifferentiable has measure zero and thus r and s are single-valued almost everywhere.
3 The inverse power method for nonlinear Eigenproblems
A standard technique to obtain the smallest eigenvalue of a positive semi-definite symmetric matrix
A is the inverse power method [12]. Its main building block is the fact that the iterative scheme
Af k+1 = f k
(6)
converges to the smallest eigenvector of A. Transforming (6) into the optimization problem
f k+1 = arg min
u
1
hu, A ui ? u, f k
2
(7)
is the motivation for the general IPM. The direct generalization tries to solve
0 ? r(f k+1 ) ? s(f k )
or equivalently
f k+1 = arg min R(u) ? u, s(f k ) ,
(8)
u
where r(f ) ? ?R(f ) and s(f ) ? ?S(f ). For p > 1 this leads directly to Algorithm 2, however
for p = 1 the direct generalization fails. In particular, the ball constraint has to be introduced in
Algorithm 1 as the objective in the optimization problem (8) is otherwise unbounded from below.
(Note that the 2-norm is only chosen for algorithmic convenience). Moreover, the introduction of
?k in Algorithm 1 is necessary to guarantee descent whereas in Algorithm 2 it would just yield a
rescaled solution of the problem in the inner loop (called inner problem in the following).
For both methods we show convergence to a solution of (4), which by Theorem 2.2 is a necessary condition for a critical point of F and often also sufficient. Interestingly, both applications are
naturally formulated as 1-homogeneous problems so that we use in both cases Algorithm 1. Nevertheless, we state the second algorithm for completeness. Note that we cannot guarantee convergence
to the smallest eigenvector even though our experiments suggest that we often do so. However, as
the method is fast one can afford to run it multiple times with different initializations and use the
eigenvector with smallest eigenvalue.
Algorithm 1 Computing a nonlinear eigenvector for convex positively p-homogeneous functions R
and S with p = 1
1: Initialization: $f^0$ = random with $\|f^0\|_2 = 1$, $\lambda^0 = F(f^0)$
2: repeat
3:   $f^{k+1} = \arg\min_{\|u\|_2 \le 1} R(u) - \lambda^k \langle u, s(f^k) \rangle$,   where $s(f^k) \in \partial S(f^k)$
4:   $\lambda^{k+1} = R(f^{k+1}) / S(f^{k+1})$
5: until $\frac{|\lambda^{k+1} - \lambda^k|}{\lambda^k} < \epsilon$
6: Output: eigenvalue $\lambda^{k+1}$ and eigenvector $f^{k+1}$.
The inner optimization problem is convex for both algorithms. It turns out that both for 1-spectral
clustering and sparse PCA the inner problem can be solved very efficiently, for sparse PCA it has
even a closed form solution. While we do not yet have results about convergence speed, empirical
observation shows that one usually converges quite quickly to an eigenvector.
Algorithm 2 Computing a nonlinear eigenvector for convex positively p-homogeneous functions R
and S with p > 1
1: Initialization: $f^0$ = random, $\lambda^0 = F(f^0)$
2: repeat
3:   $g^{k+1} = \arg\min_{u} R(u) - \langle u, s(f^k) \rangle$,   where $s(f^k) \in \partial S(f^k)$
4:   $f^{k+1} = g^{k+1} / S(g^{k+1})^{1/p}$
5:   $\lambda^{k+1} = R(f^{k+1}) / S(f^{k+1})$
6: until $\frac{|\lambda^{k+1} - \lambda^k|}{\lambda^k} < \epsilon$
7: Output: eigenvalue $\lambda^{k+1}$ and eigenvector $f^{k+1}$.
To our best knowledge both suggested methods have not been considered before. In [4] they propose
an inverse power method specially tailored towards the continuous p-Laplacian for p > 1, which
can be seen as a special case of Algorithm 2. In [15] a generalized power method has been proposed
which will be discussed in Section 5. Finally, both methods can be easily adapted to compute the
largest nonlinear eigenvalue, which however we have to omit due to space constraints.
Lemma 3.1 The sequences $f^k$ produced by Alg. 1 and 2 satisfy $F(f^k) > F(f^{k+1})$ for all $k \ge 0$ or
the sequences terminate.
Theorem 3.1 The sequences $f^k$ produced by Algorithms 1 and 2 converge to an eigenvector $f^*$
with eigenvalue $\lambda^* \in [0, F(f^0)]$ in the sense that it solves the nonlinear eigenproblem (5). If S is
continuously differentiable at $f^*$, then F has a critical point at $f^*$.
Practical implementation: By the proof of Lemma 3.1, descent in F is not only guaranteed for
the optimal solution of the inner problem, but for any vector u which has inner objective value
$\Phi_{f^k}(u) < 0 = \Phi_{f^k}(f^k)$ for Alg. 1 and $\Phi_{f^k}(u) < \Phi_{f^k}(F(f^k)^{\frac{1}{1-p}} f^k)$ in the case of Alg. 2. This
has two important practical implications. First, for the convergence of the IPM, it is sufficient to use
a vector u satisfying the above conditions instead of the optimal solution of the inner problem. In
particular, in an early stage where one is far away from the limit, it makes no sense to invest much
effort to solve the inner problem accurately. Second, if the inner problem is solved by a descent
method, a good initialization for the inner problem at step k + 1 is given by $f^k$ in the case of Alg. 1
and $F(f^k)^{\frac{1}{1-p}} f^k$ in the case of Alg. 2 as descent in F is guaranteed after one step.
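As a skeleton, Algorithm 1 is a short loop around a user-supplied inner solver. The sketch below is a minimal rendering under the stated assumptions (R and S convex, even and 1-homogeneous); `inner_solver` is left abstract and, as noted above, it may return any vector with negative inner objective value rather than an exact minimizer. The only nontrivial step is evaluating S through Euler's identity (Theorem 2.1 with p = 1); the names and defaults are illustrative.

```python
import numpy as np

def ipm(R, subgrad_S, inner_solver, f0, tol=1e-6, max_iter=100):
    """Inverse power method for F(f) = R(f)/S(f) with p = 1 (Algorithm 1).
    inner_solver(lam, s) should (approximately) minimize
        R(u) - lam * <u, s>   subject to ||u||_2 <= 1."""
    f = f0 / np.linalg.norm(f0)
    lam = None
    for _ in range(max_iter):
        s = subgrad_S(f)                   # some s(f^k) in the subdifferential
        new_lam = R(f) / np.dot(f, s)      # S(f) = <f, s> by Euler's identity
        if lam is not None and abs(new_lam - lam) / lam < tol:
            break
        lam = new_lam
        f = inner_solver(lam, s)           # descent step; need not be exact
    return lam, f
```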
4 Application 1: 1-spectral clustering and Cheeger cuts
Spectral clustering is a graph-based clustering method (see [20] for an overview) based on a relaxation of the NP-hard problem of finding the optimal balanced cut of an undirected graph. The
spectral relaxation has as its solution the second eigenvector of the graph Laplacian and the final partition is found by optimal thresholding. While usually spectral clustering is understood as relaxation
of the so called ratio/normalized cut, it can be equally seen as relaxation of the ratio/normalized
Cheeger cut, see [5]. Given a weighted undirected graph with vertex set V and weight matrix W,
the ratio Cheeger cut (RCC) of a partition $(C, \overline{C})$, where $C \subseteq V$ and $\overline{C} = V \setminus C$, is defined as
$$\mathrm{RCC}(C, \overline{C}) := \frac{\mathrm{cut}(C, \overline{C})}{\min\{|C|, |\overline{C}|\}}, \quad \text{where} \quad \mathrm{cut}(A, B) = \sum_{i \in A,\, j \in B} w_{ij},$$
where we assume in the following that the graph is connected. Due to limited space the normalized
version is omitted, but the proposed IPM can be adapted to this case. In [5] we proposed p-spectral
clustering, a generalization of spectral clustering based on the second eigenvector of the nonlinear
graph p-Laplacian (the graph Laplacian is recovered for p = 2). The main motivation was the
relation between the optimal Cheeger cut $h_{RCC} = \min_{C \subseteq V} \mathrm{RCC}(C, \overline{C})$ and the Cheeger cut $h^*_{RCC}$
obtained by optimal thresholding the second eigenvector of the p-Laplacian, see [5, 8],
$$\frac{h_{RCC}}{\max_{i \in V} d_i} \;\le\; \frac{h^*_{RCC}}{\max_{i \in V} d_i} \;\le\; p \left( \frac{h_{RCC}}{\max_{i \in V} d_i} \right)^{\frac{1}{p}}, \qquad \forall\, p > 1,$$
where $d_i = \sum_{j \in V} w_{ij}$ denotes the degree of vertex i. While the inequality is quite loose for spectral
clustering (p = 2), it becomes tight for $p \to 1$. Indeed in [5] much better cuts than standard spectral
clustering were obtained, at the expense of higher runtime. In [19] the idea was taken up and they
considered directly the variational characterization of the ratio Cheeger cut, see also [8],
$$h_{RCC} = \min_{f \text{ nonconstant}} \frac{\frac{1}{2} \sum_{i,j=1}^n w_{ij} |f_i - f_j|}{\|f - \mathrm{median}(f)\mathbf{1}\|_1} = \min_{\substack{f \text{ nonconstant}, \\ \mathrm{median}(f) = 0}} \frac{\frac{1}{2} \sum_{i,j=1}^n w_{ij} |f_i - f_j|}{\|f\|_1}. \qquad (9)$$
In [19] they proposed a minimization scheme based on the Split Bregman method [11]. Their method
produces comparable cuts to the ones in [5], while being computationally much more efficient.
However, they could not provide any convergence guarantee about their method.
In this paper we consider the functional associated to the 1-Laplacian $\Delta_1$,
$$F_1(f) = \frac{\langle f, \Delta_1 f \rangle}{\|f\|_1} = \frac{\frac{1}{2} \sum_{i,j=1}^n w_{ij} |f_i - f_j|}{\|f\|_1}, \qquad (10)$$
where
$$(\Delta_1 f)_i = \Big\{ \sum_{j=1}^n w_{ij} u_{ij} \;\Big|\; u_{ij} = -u_{ji},\ u_{ij} \in \mathrm{sign}(f_i - f_j) \Big\} \quad \text{and} \quad \mathrm{sign}(x) = \begin{cases} -1, & x < 0, \\ [-1, 1], & x = 0, \\ 1, & x > 0. \end{cases}$$
and study its associated nonlinear eigenproblem $0 \in \Delta_1 f - \lambda\, \mathrm{sign}(f)$.
Proposition 4.1 Any non-constant eigenvector $f^*$ of the 1-Laplacian has median zero. Moreover,
let $\lambda_2$ be the second eigenvalue of the 1-Laplacian, then if G is connected it holds $\lambda_2 = h_{RCC}$.
For the computation of the second eigenvector we have to modify the IPM which is discussed in the
next section.
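Before turning to the modified algorithm, note that both $F_1$ and the ratio Cheeger cut obtained by optimal thresholding are a few lines of NumPy. The sketch below is a direct, unoptimized transcription of the definitions above for a dense weight matrix W.

```python
import numpy as np

def F1(W, f):
    """Functional (10): total variation over the 1-norm."""
    tv = 0.5 * np.sum(W * np.abs(f[:, None] - f[None, :]))
    return tv / np.abs(f).sum()

def best_threshold_rcc(W, f):
    """Ratio Cheeger cut of the best set C_t = {i : f_i > t} over thresholds t."""
    n = len(f)
    best = np.inf
    for t in np.unique(f)[:-1]:           # threshold at each distinct value
        c = f > t
        cut = W[c][:, ~c].sum()
        best = min(best, cut / min(c.sum(), n - c.sum()))
    return best
```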
4.1 Modification of the IPM for computing the second eigenvector of the 1-Laplacian
The direct minimization of (10) would be compatible with the IPM, but the global minimizer is the
first eigenvector which is constant. For computing the second eigenvector note that, unlike in the
case p = 2, we cannot simply project on the space orthogonal to the constant eigenvector, since
mutual orthogonality of the eigenvectors does not hold in the nonlinear case.
Algorithm 3 is a modification of Algorithm 1 which computes a nonconstant eigenvector of the 1-Laplacian. The notation $|f_+^{k+1}|$, $|f_-^{k+1}|$ and $|f_0^{k+1}|$ refers to the cardinality of positive, negative and
zero elements, respectively. Note that Algorithm 1 requires in each step the computation of some
subgradient $s(f^k) \in \partial S(f^k)$, whereas in Algorithm 3 the subgradient $v^k$ has to satisfy $\langle v^k, \mathbf{1} \rangle = 0$.
This condition ensures that the inner objective is invariant under addition of a constant and thus
not affected by the subtraction of the median. Opposite to [19] we can prove convergence to a
nonconstant eigenvector of the 1-Laplacian. However, we cannot guarantee convergence to the
second eigenvector. Thus we recommend to use multiple random initializations and use the result
which achieves the best ratio Cheeger cut.
Theorem 4.1 The sequence $f^k$ produced by Algorithm 3 converges to an eigenvector $f^*$ of the 1-Laplacian with eigenvalue $\lambda^* \in [h_{RCC}, F_1(f^0)]$. Furthermore, $F_1(f^k) > F_1(f^{k+1})$ for all $k \ge 0$
or the sequence terminates.
4.2 Quality guarantee for 1-spectral clustering
Even though we cannot guarantee that we obtain the optimal ratio Cheeger cut, we can guarantee
that 1-spectral clustering always leads to a ratio Cheeger cut at least as good as the one found by
standard spectral clustering. Let $(C_f^*, \overline{C_f^*})$ be the partition of V obtained by optimal thresholding of
f, where $C_f^* = \arg\min_t \mathrm{RCC}(C_f^t, \overline{C_f^t})$, and for $t \in \mathbb{R}$, $C_f^t = \{i \in V \mid f_i > t\}$. Furthermore, $\mathbf{1}_C$
denotes the vector which is 1 on C and 0 else.
Lemma 4.1 Let $C, \overline{C}$ be a partitioning of the vertex set V, and assume that $|C| \le |\overline{C}|$. Then for
any vector $f \in \mathbb{R}^n$ of the form $f = \alpha \mathbf{1}_C$, where $\alpha \in \mathbb{R}$, it holds that $F_1(f) = \mathrm{RCC}(C, \overline{C})$.
Algorithm 3 Computing a nonconstant 1-eigenvector of the graph 1-Laplacian
1: Input: weight matrix W
2: Initialization: nonconstant $f^0$ with $\mathrm{median}(f^0) = 0$ and $\|f^0\|_1 = 1$, accuracy $\epsilon$
3: repeat
4:   $g^{k+1} = \arg\min_{\|f\|_2^2 \le 1} \big\{ \frac{1}{2} \sum_{i,j=1}^n w_{ij} |f_i - f_j| - \lambda^k \langle f, v^k \rangle \big\}$
5:   $f^{k+1} = g^{k+1} - \mathrm{median}(g^{k+1})$
6:   $v_i^{k+1} = \begin{cases} \mathrm{sign}(f_i^{k+1}), & \text{if } f_i^{k+1} \ne 0, \\ -\dfrac{|f_+^{k+1}| - |f_-^{k+1}|}{|f_0^{k+1}|}, & \text{if } f_i^{k+1} = 0. \end{cases}$
7:   $\lambda^{k+1} = F_1(f^{k+1})$
8: until $\frac{|\lambda^{k+1} - \lambda^k|}{\lambda^k} < \epsilon$
Lemma 4.2 Let $f \in \mathbb{R}^n$ with $\mathrm{median}(f) = 0$, and $C = \arg\min\{|C_f^*|, |\overline{C_f^*}|\}$. Then the vector
$f^* = \mathbf{1}_C$ satisfies $F_1(f) \ge F_1(f^*)$.
Theorem 4.2 Let u denote the second eigenvector of the standard graph Laplacian, and f denote
the result of Algorithm 3 after initializing with the vector $\frac{1}{|C|} \mathbf{1}_C$, where $C = \arg\min\{|C_u^*|, |\overline{C_u^*}|\}$.
Then $\mathrm{RCC}(C_u^*, \overline{C_u^*}) \ge \mathrm{RCC}(C_f^*, \overline{C_f^*})$.
4.3 Solution of the inner problem
The inner problem is convex, thus a solution can be computed by any standard method for solving
convex nonsmooth programs, e.g. subgradient methods [3]. However, in this particular case we can
exploit the structure of the problem and use the equivalent dual formulation of the inner problem.
Lemma 4.3 Let $E \subseteq V \times V$ denote the set of edges and $A : \mathbb{R}^E \to \mathbb{R}^V$ be defined as $(A\alpha)_i = \sum_{j \,:\, (i,j) \in E} w_{ij} \alpha_{ij}$. The inner problem is equivalent to
$$\min_{\{\alpha \in \mathbb{R}^E \,\mid\, \|\alpha\|_\infty \le 1,\ \alpha_{ij} = -\alpha_{ji}\}} \Psi(\alpha) := \big\| A\alpha - F(f^k)\, v^k \big\|_2^2 .$$
The Lipschitz constant of the gradient of $\Psi$ is upper bounded by $2 \max_r \sum_{s=1}^n w_{rs}^2$.
Compared to the primal problem, the objective of the dual problem is smooth. Moreover, it can be
efficiently solved using FISTA ([2]), a two-step subgradient method with guaranteed convergence
rate $O(\frac{1}{k^2})$ where k is the number of steps. The only input of FISTA is an upper bound on the Lipschitz constant of the gradient of the objective. FISTA provides a good solution in a few steps which
guarantees descent in functional (9) and thus makes the modified IPM very fast. The implementation
can be found in the supplementary material.
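A compact sketch of such a solver is given below, under the conventions of Lemma 4.3: each undirected edge is stored once and the antisymmetry $\alpha_{ij} = -\alpha_{ji}$ is kept implicit, so the box projection is a simple clipping. The recovery of a primal vector from the dual solution in the last lines is our own bookkeeping for this saddle-point structure, not a line from the paper.

```python
import numpy as np

def solve_inner_dual(edges, w, lam, v, n, iters=200):
    """FISTA on the dual of the 1-Laplacian inner problem (cf. Lemma 4.3).
    edges: (E, 2) int array of pairs (i, j) with i < j; w: (E,) edge weights;
    the target is g = lam * v."""
    I, J = edges[:, 0], edges[:, 1]
    g = lam * v
    def A(alpha):                          # (A alpha)_i = sum_j w_ij alpha_ij
        out = np.zeros(n)
        np.add.at(out, I,  w * alpha)
        np.add.at(out, J, -w * alpha)      # antisymmetry alpha_ji = -alpha_ij
        return out
    def At(r):                             # adjoint of A
        return w * (r[I] - r[J])
    row_sq = np.zeros(n)                   # Lipschitz bound 2 max_r sum_s w_rs^2
    np.add.at(row_sq, I, w ** 2); np.add.at(row_sq, J, w ** 2)
    L = 2.0 * row_sq.max()
    alpha = np.zeros(len(w)); y = alpha.copy(); t = 1.0
    for _ in range(iters):
        grad = 2.0 * At(A(y) - g)
        alpha_new = np.clip(y - grad / L, -1.0, 1.0)   # project onto the box
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = alpha_new + ((t - 1.0) / t_new) * (alpha_new - alpha)
        alpha, t = alpha_new, t_new
    r = g - A(alpha)                       # primal candidate from the dual
    return r / np.linalg.norm(r)
```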
5 Application 2: Sparse PCA
Principal Component Analysis (PCA) is a standard technique for dimensionality reduction and data
analysis [13]. PCA finds the k-dimensional subspace of maximal variance in the data. For k = 1,
given a data matrix $X \in \mathbb{R}^{n \times p}$ where each column has mean 0, in PCA one computes
$$f^* = \arg\max_{f \in \mathbb{R}^p} \frac{\langle f, X^T X f \rangle}{\|f\|_2^2}, \qquad (11)$$
where the maximizer $f^*$ is the largest eigenvector of the covariance matrix $\Sigma = X^T X \in \mathbb{R}^{p \times p}$.
The interpretation of the PCA component $f^*$ is difficult as usually all components are nonzero. In
sparse PCA one wants to get a small number of features which still capture most of the variance.
For instance, in the case of gene expression data one would like the principal components to consist
only of a few significant genes, making it easy to interpret by a human. Thus one needs to enforce
sparsity of the PCA component, which yields a trade-off between explained variance and sparsity.
While standard PCA leads to an eigenproblem, adding a constraint on the cardinality, i.e. the number of nonzero coefficients, makes the problem NP-hard. The first approaches performed simple
thresholding of the principal components which was shown to be misleading [6]. Since then several
methods have been proposed, mainly based on penalizing the L1 norm of the principal components,
including SCoTLASS [14] and SPCA [22]. D'Aspremont et al. [10] focused on the $L_0$-constrained
formulation and proposed a greedy algorithm to compute a full set of good candidate solutions up
to a specified target sparsity, and derived sufficient conditions for a vector to be globally optimal.
Moghaddam et al. [16] used branch and bound to compute optimal solutions for small problem
instances. Other approaches include D.C. [18] and EM-based methods [17]. Recently, Journée et al.
[15] proposed two single unit (computation of one component only) and two block (simultaneous
computation of multiple components) methods based on L0 -penalization and L1 -penalization.
Problem (11) is equivalent to
$$f^* = \arg\min_{f \in \mathbb{R}^p} \frac{\|f\|_2^2}{\langle f, \Sigma f \rangle} = \arg\min_{f \in \mathbb{R}^p} \frac{\|f\|_2}{\|X f\|_2}.$$
In order to enforce sparsity we use instead of the $L_2$-norm a convex combination of an $L_1$ norm and
$L_2$ norm in the numerator, which yields the functional
$$F(f) = \frac{(1 - \alpha)\|f\|_2 + \alpha\|f\|_1}{\|X f\|_2}, \qquad (12)$$
with sparsity controlling parameter $\alpha \in [0, 1]$. Standard PCA is recovered for $\alpha = 0$, whereas $\alpha = 1$
yields the sparsest non-trivial solution: the component with the maximal variance. One easily sees
that the formulation (12) fits in our general framework, as both numerator and denominator are
1-homogeneous functions. The inner problem of the IPM becomes
$$g^{k+1} = \arg\min_{\|f\|_2 \le 1} \; (1 - \alpha)\|f\|_2 + \alpha\|f\|_1 - \lambda^k \langle f, \mu^k \rangle, \quad \text{where} \quad \mu^k = \frac{\Sigma f^k}{\sqrt{\langle f^k, \Sigma f^k \rangle}}. \qquad (13)$$
This problem has a closed form solution. In the following we use the notation $x_+ = \max\{0, x\}$.
Lemma 5.1 The convex optimization problem (13) has the analytical solution
$$g_i^{k+1} = \frac{1}{s}\, \mathrm{sign}(\mu_i^k)\, \big( \lambda^k |\mu_i^k| - \alpha \big)_+, \quad \text{where} \quad s = \sqrt{\sum_{i=1}^n \big( \lambda^k |\mu_i^k| - \alpha \big)_+^2}.$$
As s is just a scaling factor, we can omit it and obtain the simple and efficient scheme to compute
sparse principal components shown in Algorithm 4. While the derivation is quite different from
[15], the resulting algorithms are very similar. The subtle difference is that in our formulation the
thresholding parameter of the inner problem depends on the current eigenvalue estimate whereas it
is fixed in [15]. Empirically, this leads to the fact that we need slightly fewer iterations to converge.
Algorithm 4 Sparse PCA
1: Input: data matrix X, sparsity controlling parameter $\alpha$, accuracy $\epsilon$
2: Initialization: $f^0$ = random with $S(f^0) = 1$, $\lambda^0 = F(f^0)$
3: repeat
4:   $g_i^{k+1} = \mathrm{sign}(\mu_i^k)\, \big( \lambda^k |\mu_i^k| - \alpha \big)_+$
5:   $f^{k+1} = g^{k+1} / \|X g^{k+1}\|_2$
6:   $\lambda^{k+1} = (1 - \alpha)\, \|f^{k+1}\|_2 + \alpha\, \|f^{k+1}\|_1$
7:   $\mu^{k+1} = \dfrac{\Sigma f^{k+1}}{\|X f^{k+1}\|_2}$
8: until $\frac{|\lambda^{k+1} - \lambda^k|}{\lambda^k} < \epsilon$
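Algorithm 4 translates almost line for line into NumPy. The sketch below is a direct transcription; the random initialization, the stopping constants and the guard against a fully thresholded iterate are illustrative choices.

```python
import numpy as np

def sparse_pca(X, alpha, eps=1e-6, max_iter=1000, seed=0):
    """IPM for sparse PCA (Algorithm 4). X: (n, p) with centered columns,
    alpha in [0, 1] trades variance against sparsity."""
    rng = np.random.default_rng(seed)
    f = rng.standard_normal(X.shape[1])
    def R(f):  # numerator of (12)
        return (1 - alpha) * np.linalg.norm(f) + alpha * np.abs(f).sum()
    lam = R(f) / np.linalg.norm(X @ f)
    for _ in range(max_iter):
        mu = X.T @ (X @ f) / np.linalg.norm(X @ f)   # Sigma f / ||X f||_2
        g = np.sign(mu) * np.maximum(lam * np.abs(mu) - alpha, 0.0)  # soft threshold
        nrm = np.linalg.norm(X @ g)
        if nrm == 0.0:            # alpha so large that the threshold killed g
            return f, lam
        f = g / nrm               # now ||X f||_2 = 1
        lam_new = R(f)            # equals F(f) since the denominator is 1
        if abs(lam_new - lam) / lam < eps:
            return f, lam_new
        lam = lam_new
    return f, lam
```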
6 Experiments
1-Spectral Clustering: We compare our IPM with the total variation (TV) based algorithm by
[19], p-spectral clustering with p = 1.1 [5] as well as standard spectral clustering with optimal
thresholding the second eigenvector of the graph Laplacian (p = 2). The graph and the two-moons
dataset is constructed as in [5]. The following table shows the average ratio Cheeger cut (RCC) and
error (classification as in [5]) for 100 draws of a two-moons dataset with 2000 points. In the case of
the IPM, we use the best result of 10 runs with random initializations and one run initialized with
the second eigenvector of the unnormalized graph Laplacian. For [19] we initialize once with the
second eigenvector of the normalized graph Laplacian as proposed in [19] and 10 times randomly.
IPM and the TV-based method yield similar results, slightly better than 1.1-spectral and clearly
outperforming standard spectral clustering. In terms of runtime, IPM and [19] are on the same level.
             Inverse Power Method   Szlam & Bresson [19]   1.1-spectral [5]    Standard spectral
Avg. RCC     0.0195 (± 0.0015)      0.0195 (± 0.0015)      0.0196 (± 0.0016)   0.0247 (± 0.0016)
Avg. error   0.0462 (± 0.0161)      0.0491 (± 0.0181)      0.0578 (± 0.0285)   0.1685 (± 0.0200)
Figure 1: Left and middle: Second eigenvector of the 1-Laplacian and 2-Laplacian, respectively.
Right: Relative Variance (relative to maximal possible variance) versus number of non-zero components for the three datasets Lung2, GCM and Prostate1.
Next we perform unnormalized 1-spectral clustering on the full USPS and MNIST-datasets (9298
resp. 70000 points). As clustering criterion we use the multicut version of RCut, given as
$$\mathrm{RCut}(C_1, \dots, C_K) = \sum_{i=1}^K \frac{\mathrm{cut}(C_i, \overline{C_i})}{|C_i|}.$$
We successively subdivide clusters until the desired number of clusters (K = 10) is reached. In each
substep the eigenvector obtained on the subgraph is thresholded such that the multi-cut criterion is
minimized. This recursive partitioning scheme is used for all methods. As in the previous experiment, we perform one run initialized with the thresholded second eigenvector of the unnormalized
graph Laplacian in the case of the IPM and with the second eigenvector of the normalized graph
Laplacian in the case of [19]. In both cases we add 100 runs with random initializations. The next
table shows the obtained RCut and errors.
                 Inverse Power Method   S.&B. [19]   1.1-spectral [5]   Standard spectral
MNIST   Rcut     0.1507                 0.1545       0.1529             0.2252
        Error    0.1244                 0.1318       0.1293             0.1883
USPS    Rcut     0.6661                 0.6663       0.6676             0.8180
        Error    0.1349                 0.1309       0.1308             0.1686
Again the three nonlinear eigenvector methods clearly outperform standard spectral clustering. Note
that our method requires additional effort (100 runs) but we get better results. For both datasets our
method achieves the best RCut. However, if one wants to do only a single run, by Theorem 4.2
for bi-partitions one achieves a cut at least as good as the one of standard spectral clustering if one
initializes with the thresholded 2nd eigenvector of the 2-Laplacian.
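A skeleton of the recursive multicut scheme used above is sketched below; `bipartition` stands for any of the compared methods (it returns a two-way split of the given index set), and at each round the tentative split that yields the smallest multicut value is accepted. This is our paraphrase of the procedure, not code from the paper.

```python
import numpy as np

def rcut(W, clusters):
    """Multicut criterion: sum over clusters of cut(C_i, complement)/|C_i|."""
    total = 0.0
    for c in clusters:
        mask = np.zeros(W.shape[0], dtype=bool); mask[c] = True
        total += W[np.ix_(c, np.where(~mask)[0])].sum() / len(c)
    return total

def recursive_multicut(W, bipartition, K=10):
    """Greedy recursive bi-partitioning until K clusters are reached.
    bipartition(W, idx) -> (idx_a, idx_b) splits one cluster in two."""
    clusters = [np.arange(W.shape[0])]
    while len(clusters) < K:
        best_val, best_split = np.inf, None
        for i, c in enumerate(clusters):
            a, b = bipartition(W, c)
            cand = clusters[:i] + clusters[i + 1:] + [a, b]
            val = rcut(W, cand)
            if val < best_val:
                best_val, best_split = val, cand
        clusters = best_split
    return clusters
```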
Sparse PCA: We evaluate our IPM for sparse PCA on gene expression datasets obtained from
[1]. We compare with two recent algorithms: the L1 based single-unit power algorithm of [15]
as well as the EM-based algorithm in [17]. For all considered datasets, the three methods achieve
very similar performance in terms of the tradeoff between explained variance and sparsity of the
solution, see Fig.1 (Right). In fact the results are so similar that for each dataset, the plots of all
three methods coincide in one line. In [15] it also has been observed that the best state-of-the-art
algorithms produce the same trade-off curve if one uses the same initialization strategy.
Acknowledgments: This work has been supported by the Excellence Cluster on Multimodal Computing and Interaction at Saarland University.
References
[1] http://www.stat.ucla.edu/~wxl/research/microarray/DBC/index.htm.
[2] A. Beck and M. Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing, 18(11):2419-2434, 2009.
[3] D.P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[4] R.J. Biezuner, G. Ercole, and E.M. Martins. Computing the first eigenvalue of the p-Laplacian via the inverse power method. Journal of Functional Analysis, 257:243-270, 2009.
[5] T. Bühler and M. Hein. Spectral Clustering based on the graph p-Laplacian. In Proceedings of the 26th International Conference on Machine Learning, pages 81-88. Omnipress, 2009.
[6] J. Cadima and I.T. Jolliffe. Loading and correlations in the interpretation of principal components. Journal of Applied Statistics, 22:203-214, 1995.
[7] K.-C. Chang. Variational methods for non-differentiable functionals and their applications to partial differential equations. Journal of Mathematical Analysis and Applications, 80:102-129, 1981.
[8] F.R.K. Chung. Spectral Graph Theory. AMS, 1997.
[9] F.H. Clarke. Optimization and Nonsmooth Analysis. Wiley New York, 1983.
[10] A. d'Aspremont, F. Bach, and L. El Ghaoui. Optimal solutions for sparse principal component analysis. Journal of Machine Learning Research, 9:1269-1294, 2008.
[11] T. Goldstein and S. Osher. The Split Bregman method for L1-Regularized Problems. SIAM Journal on Imaging Sciences, 2(2):323-343, 2009.
[12] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, 3rd edition, 1996.
[13] I.T. Jolliffe. Principal Component Analysis. Springer, 2nd edition, 2002.
[14] I.T. Jolliffe, N. Trendafilov, and M. Uddin. A modified principal component technique based on the LASSO. Journal of Computational and Graphical Statistics, 12:531-547, 2003.
[15] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre. Generalized Power Method for Sparse Principal Component Analysis. Journal of Machine Learning Research, 11:517-553, 2010.
[16] B. Moghaddam, Y. Weiss, and S. Avidan. Spectral bounds for sparse PCA: Exact and greedy algorithms. In Advances in Neural Information Processing Systems, pages 915-922. MIT Press, 2006.
[17] C.D. Sigg and J.M. Buhmann. Expectation-maximization for sparse and non-negative PCA. In Proceedings of the 25th International Conference on Machine Learning, pages 960-967. ACM, 2008.
[18] B.K. Sriperumbudur, D.A. Torres, and G.R.G. Lanckriet. Sparse eigen methods by D.C. programming. In Proceedings of the 24th International Conference on Machine Learning, pages 831-838. ACM, 2007.
[19] A. Szlam and X. Bresson. Total variation and Cheeger cuts. In Proceedings of the 27th International Conference on Machine Learning, pages 1039-1046. Omnipress, 2010.
[20] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395-416, 2007.
[21] F. Yang and Z. Wei. Generalized Euler identity for subdifferentials of homogeneous functions and applications. Mathematical Analysis and Applications, 337:516-523, 2008.
[22] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15:265-286, 2006.
| 4110 |@word cu:4 middle:1 version:2 norm:5 loading:1 nd:2 hu:1 covariance:1 sepulchre:1 ipm:19 f0k:1 reduction:1 interestingly:1 recovered:2 current:1 yet:1 john:1 partition:4 plot:1 greedy:2 characterization:2 completeness:1 provides:1 unbounded:1 saarland:3 mathematical:2 constructed:1 direct:4 differential:1 prove:1 introduce:1 excellence:1 indeed:1 p1:1 multi:1 globally:1 ucken:1 considering:1 cardinality:2 becomes:2 project:1 moreover:4 notation:2 bounded:1 kind:1 eigenvector:38 finding:2 guarantee:8 every:2 runtime:5 k2:9 tricky:1 partitioning:2 unit:2 szlam:2 omit:2 rcc:11 bertsekas:1 positive:3 before:2 understood:2 modify:1 limit:1 therein:1 studied:1 initialization:10 limited:1 bi:1 practical:2 acknowledgment:1 recursive:1 block:2 definite:2 empirical:1 refers:1 suggest:1 get:3 convenience:1 close:1 cannot:4 operator:6 restriction:3 equivalent:3 www:1 convex:10 focused:1 fik:3 variation:3 resp:1 target:1 suppose:1 controlling:2 exact:1 programming:2 homogeneous:9 us:1 deblurring:1 lanckriet:1 element:1 satisfying:1 cut:22 observed:1 solved:4 initializing:1 capture:1 ensures:1 connected:2 richt:1 trade:3 rescaled:1 balanced:1 cheeger:12 intuition:1 vanishes:1 transforming:1 ui:1 nesterov:1 tight:1 solving:1 algebra:1 usps:2 easily:4 multimodal:1 htm:1 derivation:1 fast:3 quite:6 widely:1 supplementary:2 valued:2 solve:2 otherwise:1 statistic:6 fischer:2 gi:1 final:1 sequence:5 eigenvalue:15 differentiable:5 matthias:1 analytical:1 propose:1 interaction:1 maximal:3 loop:1 subgraph:1 achieve:2 invest:1 convergence:8 cluster:3 produce:2 converges:3 derive:1 stat:1 ij:2 solves:1 c:1 human:1 material:2 hx:1 f1:8 generalization:6 proposition:1 hold:5 considered:3 algorithmic:1 achieves:3 early:1 smallest:4 omitted:2 purpose:1 largest:3 tool:1 weighted:1 minimization:2 enumerator:2 clearly:3 mit:1 always:1 arik:1 modified:2 fulfill:1 ck:1 pn:4 minc:1 l0:2 derived:1 mainly:1 sense:2 am:1 el:1 relation:3 journ:1 uij:3 wij:8 germany:1 arg:12 dual:2 classification:1 constrained:3 art:2 special:2 mutual:1 initialize:1 once:1 eigenproblem:9 unsupervised:1 uddin:1 minf:1 minimized:1 nonsmooth:5 np:2 recommend:1 few:3 randomly:1 beck:1 uhler:1 severe:1 golub:1 primal:1 implication:1 bregman:2 edge:1 moghaddam:2 partial:1 necessary:4 orthogonal:1 limg:1 abundant:1 re:2 initialized:2 hein:3 desired:1 instance:2 column:1 modeling:2 teboulle:1 bresson:2 maximization:1 cost:1 vertex:3 euler:2 considerably:1 international:4 siam:1 off:3 hopkins:1 continuously:2 quickly:1 again:1 von:1 successively:1 opposed:1 multicut:1 derivative:1 chung:1 de:1 coefficient:1 satisfy:2 vi:2 depends:1 performed:1 view:1 try:1 closed:2 characterizes:1 sup:1 wrs:1 hf:4 reached:1 kxg:1 accuracy:2 moon:2 variance:8 efficiently:4 yield:5 directional:1 accurately:1 produced:3 explain:1 simultaneous:1 definition:2 sriperumbudur:1 naturally:3 associated:6 proof:2 di:4 gain:1 dataset:3 knowledge:1 dimensionality:1 subtle:1 goldstein:1 back:1 higher:2 courant:2 wei:2 formulation:5 though:2 furthermore:2 just:2 stage:1 until:5 correlation:1 gcm:1 nonlinear:28 maximizer:1 quality:2 scientific:1 building:1 k22:1 concept:1 normalized:5 dbc:1 subdifferentials:1 symmetric:4 nonzero:4 unnormalized:3 criterion:2 generalized:6 l1:5 fj:5 omnipress:2 image:2 variational:3 fi:6 recently:1 functional:7 ji:1 overview:1 empirically:1 discussed:2 interpretation:3 interpret:2 significant:1 scotlass:1 rd:1 had:1 moving:1 add:1 recent:3 perspective:1 mint:1 certain:3 nonconvex:1 inequality:1 outperforming:1 seen:2 additional:1 subtraction:1 
converge:4 semi:2 rv:1 multiple:3 desirable:1 full:2 reduces:1 branch:1 smooth:1 xf:1 characterized:1 af:3 bach:1 equally:1 laplacian:28 avidan:1 denominator:1 expectation:1 iteration:1 tailored:1 achieved:1 c1:1 preserved:1 whereas:4 addition:1 want:2 else:1 median:7 microarray:1 specially:1 unlike:1 subject:1 undirected:2 ee:1 cadima:1 yang:1 spca:1 split:2 easy:2 fit:1 hastie:1 lasso:1 opposite:1 inner:17 idea:1 tradeoff:1 expression:2 pca:22 effort:2 nonquadratic:1 proceed:1 afford:1 york:1 useful:3 eigenvectors:7 eigenproblems:7 amount:1 http:1 outperform:1 tutorial:1 sign:6 tibshirani:1 affected:1 nevertheless:1 penalizing:1 thresholded:3 buhler:1 imaging:1 graph:20 relaxation:4 subgradient:4 year:1 run:7 inverse:11 everywhere:1 luxburg:1 almost:1 draw:1 clarke:2 scaling:1 comparable:1 bit:1 bound:3 ki:3 guaranteed:5 quadratic:5 adapted:4 constraint:5 orthogonality:1 ucla:1 speed:1 min:14 martin:1 tv:3 according:1 ball:1 combination:1 terminates:1 slightly:2 em:2 sigg:1 modification:2 making:1 osher:1 explained:2 invariant:1 ghaoui:1 taken:1 computationally:1 equation:3 turn:1 loose:1 jolliffe:3 available:1 generalizes:1 apply:1 away:1 spectral:34 enforce:2 subdivide:1 rp:4 eigen:1 thomas:1 denotes:2 clustering:28 cf:7 include:1 graphical:2 matric:1 exploit:1 restrictive:1 k1:6 objective:7 initializes:1 strategy:1 gradient:5 subspace:1 athena:1 nondifferentiable:3 nx:1 trivial:1 index:1 ratio:11 equivalently:1 difficult:2 expense:1 stated:1 negative:2 implementation:2 perform:2 upper:2 observation:1 datasets:5 descent:5 rn:15 introduced:1 specified:1 journee:1 saarbr:1 beyond:1 suggested:2 usually:4 below:1 sparsity:7 tb:1 program:1 max:3 including:1 power:14 critical:12 natural:1 regularized:1 buhmann:1 scheme:4 improve:1 misleading:1 imply:1 aspremont:2 nonconstant:6 prior:1 l2:2 kf:14 relative:2 interesting:1 versus:1 penalization:2 degree:2 sufficient:5 cft:3 principle:1 thresholding:6 rcut:6 compatible:1 repeat:4 supported:1 allow:1 sparse:19 k12:1 van:1 overcome:1 curve:1 xn:1 computes:2 avg:2 coincide:1 far:1 transaction:1 functionals:3 uni:1 gene:3 global:1 maxr:1 continuous:3 iterative:1 uji:1 table:2 terminate:1 alg:5 zou:1 main:2 motivation:3 edition:2 positively:5 fig:1 torres:1 wiley:1 fails:1 sparsest:1 candidate:1 kxf:3 theorem:8 kuk2:1 maxi:3 gik:1 consist:1 mnist:2 adding:1 ci:3 simply:1 chang:1 springer:1 trendafilov:1 minimizer:1 satisfies:2 acm:2 identity:2 formulated:3 towards:1 lipschitz:4 hard:2 fista:3 loan:1 denoising:1 principal:12 lemma:6 called:2 total:3 evaluate:1 |
3,436 | 4,111 | Large-Scale Matrix Factorization with Missing Data
under Additional Constraints
Kaushik Mitra *†
Department of Electrical and Computer Engineering and UMIACS
University of Maryland, College Park, MD 20742
[email protected]
Sameer Sheorey †
Toyota Technological Institute, Chicago
[email protected]
Rama Chellappa
Department of Electrical and Computer Engineering and UMIACS
University of Maryland, College Park, MD 20742
[email protected]
Abstract
Matrix factorization in the presence of missing data is at the core of many computer vision problems such as structure from motion (SfM), non-rigid SfM and
photometric stereo. We formulate the problem of matrix factorization with missing data as a low-rank semidefinite program (LRSDP) with the advantage that:
1) an efficient quasi-Newton implementation of the LRSDP enables us to solve
large-scale factorization problems, and 2) additional constraints such as orthonormality, required in orthographic SfM, can be directly incorporated in the new
formulation. Our empirical evaluations suggest that, under the conditions of matrix completion theory, the proposed algorithm finds the optimal solution, and also
requires fewer observations compared to the current state-of-the-art algorithms.
We further demonstrate the effectiveness of the proposed algorithm in solving the
affine SfM problem, non-rigid SfM and photometric stereo problems.
1 Introduction
Many computer vision problems such as SfM [26], non-rigid SfM [3] and photometric stereo [11]
can be formulated as a matrix factorization problem. In all these problems, the measured data are
observations of the elements of an m × n measurement matrix M of known rank r. The objective
is to factorize this measurement matrix M into factors A and B of dimensions m × r and n × r,
respectively, such that the error ||M − AB^T|| is minimized. When all the elements of M are known,
and assuming that the elements are corrupted by Gaussian noise, the solution to this problem is given
by the singular value decomposition (SVD) of M . However, in most real applications many of the
elements of M will be missing and we need to solve a modified problem given by:
\min_{A,B} \; \|W \circ (M - AB^T)\|_F^2 + \lambda_1 \|A\|_F^2 + \lambda_2 \|B\|_F^2 \qquad (1)
where ∘ is the Hadamard element-wise product, W is a weight matrix with zeroes at indices corresponding to the missing elements of M, and ||A||_F^2, ||B||_F^2 are regularization terms which prevent data overfitting.

* Partially supported by an ARO MURI on opportunistic sensing under the grant W911NF-09-1-0383.
† Kaushik Mitra and Sameer Sheorey contributed equally to this work.

Matrix factorization with missing data is a difficult non-convex problem with no
known globally convergent algorithm. The damped Newton algorithm [4], a variant of Newton's
method, is one of the most popular algorithms for solving this problem. However, this algorithm has
high computational complexity and memory requirements and so cannot be used for solving large
scale problems.
We formulate the matrix factorization with missing data problem as an LRSDP [6], which is essentially a rank-constrained semidefinite programming problem (SDP) that was proposed to solve large SDPs in an efficient way. The advantages of formulating the matrix factorization problem as
an LRSDP problem are the following: 1) it inherits the efficiency of the LRSDP algorithm. The LRSDP algorithm is based on a quasi-Newton method, which has lower computational complexity and memory requirements than Newton's method, and so is ideally suited for solving large-scale problems. 2) Many additional constraints, such as the ortho-normality constraints for orthographic SfM, can be easily incorporated into the LRSDP-based factorization formulation; this is
possible because of the flexible framework of the LRSDP (see section 2).
Prior Work Algorithms for matrix factorization in the presence of missing data can be broadly
divided into two main categories: initialization algorithms and iterative algorithms. Initialization
algorithms [26, 13, 10, 18, 25] generally minimize an algebraic or approximate cost of (1) and are
used for providing a good starting point for the iterative algorithms. Iterative algorithms are those
algorithms that directly minimize the cost function (1). Alternation algorithms [23, 28, 12, 1, 2, 14],
damped Newton algorithm [4] and our approach fall under this category. Alternation algorithms are
based on the fact that if one of the factors A or B is known, then there are closed form or numerical
solutions for the other factor. Though the alternation-based algorithms minimize the cost in each
iteration, they are essentially a coordinate descent approach and suffer from flatlining, requiring
an excessive number of iterations before convergence [4]. To solve this problem, damped Newton
and hybrid algorithms between damped Newton and alternation were proposed in [4]. Although
these algorithms give very good results, they cannot be used for solving large-scale problems because of their high computational complexity and memory requirements. Other algorithms based on
Newton's method have been proposed in [7, 21], which also cannot be used for solving large-scale
problems.
The matrix factorization with missing data problem is closely related to the matrix completion problem [9]. The goal of matrix completion is to find a low-rank matrix which agrees with the observed
entries of the matrix M . Recently, many efficient algorithms have been proposed for solving this
problem [8, 17, 19, 16, 15, 20]. Some of them [16, 15, 20] are formulated as matrix factorization problems. However, we note that these algorithms, by themselves, can not handle additional
constraints. Matrix factorization also arises while solving the collaborative filtering problem. Collaborative filtering is the task of predicting the interests of a user by collecting taste information
from many users, for example in a movie recommendation system. In [24], collaborative filtering
is formulated as a matrix completion problem and solved using a semidefinite program. Later a
fast version, using conjugate gradient, was proposed in [22], but it also cannot handle additional
constraints.
2 Background: Low-rank semidefinite programming (LRSDP)
LRSDP was proposed in [6] to efficiently solve large-scale SDPs [27]. In the following paragraphs,
we briefly define the SDP and LRSDP problems, and discuss the efficient algorithm used for solving
the LRSDP problem.
SDP is a subfield of convex optimization concerned with the optimization of a linear objective
function over the intersection of the cone of positive semidefinite matrices with an affine space. The
standard-form SDP is given by:
\min \; C \bullet X \quad \text{subject to} \quad A_i \bullet X = b_i, \; i = 1, \ldots, k, \quad X \succeq 0 \qquad (2)
where C and the A_i are n × n real symmetric matrices, b is a k-dimensional vector, and X is an n × n matrix variable, which is required to be symmetric and positive semidefinite, as indicated by the constraint X ⪰ 0. The operator • denotes the inner product in the space of n × n symmetric matrices, defined as A • B = trace(A^T B) = \sum_{i=1}^{n} \sum_{j=1}^{n} A_{ij} B_{ij}. The most common algorithms for solving (2) are the interior point methods [27]. However, these are second-order methods, which
need to store and factorize a large (and often dense) matrix and hence are not suitable for solving
large scale problems.
In LRSDP a change of variables is introduced as X = RR^T, where R is a real n × r matrix with r ≪ n. This has the advantage that it removes the non-linear constraint X ⪰ 0, which is the most challenging aspect of solving (2). However, this comes with the cost that the problem may no longer be a convex problem. The LRSDP formulation is given by:

(N_r) \qquad \min \; C \bullet RR^T \quad \text{subject to} \quad A_i \bullet RR^T = b_i, \; i = 1, \ldots, k \qquad (3)
Note that the LRSDP formulation depends on r; when r = n, (3) is equivalent to (2). But the
intention is to choose r as small as possible so as to reduce the number of variables, while the
problem remains equivalent to the original problem (2).
A non-linear optimization technique called the augmented Lagrangian method is used for solving
(3). The majority of the iterations in this algorithm involve the minimization of an augmented Lagrangian function with respect to the variable R which is done by a limited memory BFGS method.
BFGS, a quasi-Newton method, is much more efficient than Newton's method both in terms of computations and memory requirement. The LRSDP algorithm further optimizes the computations and
storage requirements for sparse C and Ai matrices, which is true for problems of our interest. For
further details on the algorithm, see [6, 5].
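To make the algorithmic structure concrete, here is a minimal sketch of the augmented Lagrangian loop just described; the function names, the dense-matrix representation, and the simple penalty-update schedule are our own illustrative assumptions, not the exact implementation of [6, 5].

```python
import numpy as np
from scipy.optimize import minimize

def solve_lrsdp(C, A_list, b, n, r, outer_iters=20, sigma0=10.0):
    """Augmented Lagrangian sketch for (3): min C . RR^T  s.t.  A_i . RR^T = b_i.
    C and the A_i are dense symmetric n x n arrays here for simplicity."""
    rng = np.random.default_rng(0)
    R = 0.1 * rng.standard_normal((n, r))   # random initialization
    lam = np.zeros(len(b))                  # Lagrange multipliers
    sigma = sigma0                          # penalty parameter

    def residuals(R):
        X = R @ R.T
        return np.array([np.sum(A * X) for A in A_list]) - b

    def aug_lagrangian(flat_R):
        R = flat_R.reshape(n, r)
        v = residuals(R)
        return np.sum(C * (R @ R.T)) - lam @ v + 0.5 * sigma * (v @ v)

    for _ in range(outer_iters):
        # inner minimization over R with a limited-memory quasi-Newton method
        R = minimize(aug_lagrangian, R.ravel(),
                     method="L-BFGS-B").x.reshape(n, r)
        v = residuals(R)
        lam -= sigma * v                    # first-order multiplier update
        sigma *= 2.0                        # simple penalty schedule (assumed)
    return R
```

In practice, supplying analytic gradients and exploiting the sparsity of C and the A_i, as the actual LRSDP code does, is what makes the method scale.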
3 Matrix factorization using LRSDP (MF-LRSDP)
In this section, we formulate the matrix factorization with missing data as an LRSDP problem. We
do this in the following stages: in section 3.1, we look at the noiseless case, that is, where the
measurement matrix M is not corrupted with noise, followed by the noisy measurement case in
section 3.2, and finally in section 3.3, we look at how additional constraints can be incorporated in
the LRSDP formulation.
3.1 Noiseless Case
When the observed elements of the m × n dimensional measurement matrix M are not corrupted
with noise, a meaningful cost to minimize would be:
\min_{A,B} \; \|A\|_F^2 + \|B\|_F^2 \quad \text{subject to} \quad (AB^T)_{i,j} = M_{i,j} \;\; \text{for} \; (i, j) \in \Omega \qquad (4)
where Ω is the index set of the observed entries of M, and A, B are the desired factor matrices of dimensions m × r and n × r respectively. To formulate this as an LRSDP problem, we introduce an (m + n) × r dimensional matrix R = [A; B], with A stacked on top of B. Then

RR^T = \begin{bmatrix} AA^T & AB^T \\ BA^T & BB^T \end{bmatrix} \qquad (5)
We observe that the cost function ||A||_F^2 + ||B||_F^2 can be expressed as trace(RR^T) and the constraints as (RR^T)_{i,j+m} = M_{i,j}. Thus, (4) is equivalent to:

\min_R \; \mathrm{trace}(RR^T) \quad \text{subject to} \quad (RR^T)_{i,j+m} = M_{i,j} \;\; \text{for} \; (i, j) \in \Omega \qquad (6)

This is already in the LRSDP form, since we can express the above equation as

\min_R \; C \bullet RR^T \quad \text{subject to} \quad A_l \bullet RR^T = b_l, \; l = 1, \ldots, |\Omega| \qquad (7)
where C is an (m + n) × (m + n) identity matrix, and to simplify the notation we have introduced the index l with Ω(l) = (i, j), l = 1, …, |Ω|. The A_l are sparse matrices with the non-zero entries at indices (i, j + m) and (j + m, i) equal to 1/2, and b_l = M_{i,j}. This completes the formulation of the matrix factorization problem as an LRSDP problem for the noiseless case. Next we look at the noisy case.
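For concreteness, a short sketch of assembling the LRSDP data (C, A_l, b_l) of (7) for the noiseless case; the sparse layout follows the description above, while the helper itself is ours, not code from [6].

```python
import numpy as np
from scipy.sparse import identity, coo_matrix

def build_noiseless_lrsdp(M, omega):
    """Build C, {A_l}, {b_l} of problem (7).
    omega is a list of observed index pairs (i, j) of the m x n matrix M."""
    m, n = M.shape
    N = m + n
    C = identity(N, format="csr")        # objective: trace(RR^T)
    A_list, b = [], []
    for (i, j) in omega:
        # symmetric sparse A_l with 1/2 at (i, j+m) and (j+m, i)
        rows = [i, j + m]
        cols = [j + m, i]
        vals = [0.5, 0.5]
        A_list.append(coo_matrix((vals, (rows, cols)), shape=(N, N)))
        b.append(M[i, j])
    return C, A_list, np.array(b)
```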
3.2 Noisy case
When the observed entries of M are corrupted with noise, an appropriate cost function to minimize
would be:
\min_{A,B} \; \|W \circ (M - AB^T)\|_F^2 + \lambda \|A\|_F^2 + \lambda \|B\|_F^2 \qquad (8)
where ∘ is the Hadamard element-wise product and W is a weight matrix with zeros corresponding to the missing entries and 1 at the observed entries in M. To formulate this as an LRSDP problem, we introduce noise variables e_l, l = 1, 2, …, |Ω|, which are defined as e_l = (M − AB^T)_l. Now, (8) can be expressed as

\min_{A,B,e} \; \|e\|_2^2 + \lambda \|A\|_F^2 + \lambda \|B\|_F^2 \quad \text{subject to} \quad (M - AB^T)_l = e_l \;\; \text{for} \; l = 1, 2, \ldots, |\Omega| \qquad (9)
Next, we aim to formulate this as an LRSDP problem. For this, we construct an augmented noise vector E = [e^T 1]^T and define R to be

R = \begin{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix} & 0 \\ 0 & E \end{bmatrix} \qquad (10)

R is a "block-diagonal" matrix, where the blocks are of sizes (m + n) × r and (|Ω| + 1) × 1 respectively. With this definition, RR^T is a block-diagonal matrix given by

RR^T = \begin{bmatrix} \begin{bmatrix} AA^T & AB^T \\ BA^T & BB^T \end{bmatrix} & 0 \\ 0 & EE^T \end{bmatrix} \qquad (11)
We can now express (8) in the following LRSDP form:

\min_R \; C \bullet RR^T \quad \text{subject to} \quad A_l \bullet RR^T = b_l, \; l = 1, \ldots, |\Omega| + 1 \qquad (12)

with

C = \begin{bmatrix} \lambda I_{(m+n) \times (m+n)} & 0 \\ 0 & I_{(|\Omega|+1) \times (|\Omega|+1)} \end{bmatrix} \qquad (13)
Note that the number of constraints |Ω| + 1 in (12) is one more than the number of observations |Ω|. This is because the last constraint is used to set E_{|Ω|+1} = 1, which is done by choosing A_{|Ω|+1} to be a sparse matrix with the non-zero entry at index (|Ω| + 1 + m + n, |Ω| + 1 + m + n) equal to 1 and b_{|Ω|+1} = 1. For the remaining values of l, the A_l are sparse matrices with the non-zero entries at indices (i, j + m), (j + m, i), (|Ω| + 1 + m + n, l + m + n) and (l + m + n, |Ω| + 1 + m + n) equal to 1/2, and b_l = M_l. Note that (12) is a block-LRSDP problem (R has a block-diagonal structure), which
is a simple extension of the original LRSDP problem [5]. This completes the LRSDP formulation
for the noisy case. Next, we look at incorporating additional constraints in this framework.
3.3 Enforcing Additional Constraints
Many additional constraints can be easily incorporated in the LRSDP formulation. We illustrate
this using the specific example of orthographic SfM [26]. SfM is the problem of reconstructing the
scene structure (3-D point positions and camera parameters) from 2-D projections of the points in
the cameras. Suppose that m/2 cameras are looking at n 3-D points, then under the affine camera
model, the 2-D imaged points can be arranged as an m × n measurement matrix M with columns corresponding to the n 3-D points and rows corresponding to the m/2 cameras (2 consecutive rows per camera) [26]. Under this arrangement, M can be factorized as M = AB^T, where A is an m × 4 camera matrix and B is an n × 4 structure matrix whose last column is an all-one vector.
Thus, M is a rank 4 matrix with a special structure for the last column of B. Further, under the orthographic camera model, A has more structure (constraints): each pair of 'rows' that corresponds to the same camera is ortho-normal. To state these constraints precisely, we decompose the A matrix as A = [P t], where P is an m × 3 sub-matrix consisting of the first three columns and t is the last column vector. We can now express the camera ortho-normality constraint through the P P^T matrix, whose diagonal elements should be 1 (normality constraint) and appropriate off-diagonal elements should be 0 (orthogonality constraint). Since the last column of B is the all-one vector, we can write B = [X 1], where X is an n × 3 matrix. Thus, AB^T = P X^T + t 1^T and the observation error can be expressed as e_l = (M − P X^T)_l − t_i for Ω(l) = (i, j). A meaningful optimization problem to solve here would be to minimize the observation error subject to the ortho-normality constraints:

\min_{e,P,X,t} \; \|e\|_2^2 \quad \text{subject to} \quad e_l = (M - P X^T)_l - t_i, \; l = 1, 2, \ldots, |\Omega|,
\quad (P P^T)_{k,k} = 1, \; k = 1, 2, \ldots, m,
\quad (P P^T)_{k,l} = 0 \; \text{if rows } k \text{ and } l \text{ are from the same camera} \qquad (14)
To formulate this as an LRSDP problem, we introduce the augmented translation variable T = [t^T 1]^T, and propose the following block-diagonal matrix R:

R = \begin{bmatrix} \begin{bmatrix} P \\ X \end{bmatrix} & 0 & 0 \\ 0 & T & 0 \\ 0 & 0 & E \end{bmatrix} \qquad (15)
With this definition of R, we can express (14) as an LRSDP problem; following steps similar to the previous sections, it should be straightforward to figure out the appropriate C and A_l matrices required in this LRSDP formulation (3). This completes our illustration on the incorporation of
the ortho-normality constraints for the orthographic SfM case. This example should convince the
reader that many other application-specific constraints can be directly incorporated into the LRSDP
formulation; this is because of the underlying SDP structure of the LRSDP.
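As a sketch of how such constraints enter the framework, the following enumerates the ortho-normality constraints of (14) as (A_l, b_l) pairs acting on the leading [P; X] block of R; the row-pairing convention (rows 2k and 2k + 1 belong to camera k) and the helper name are our illustrative assumptions.

```python
import numpy as np
from scipy.sparse import coo_matrix

def orthonormality_constraints(m, block_dim):
    """Constraints (PP^T)_{k,k} = 1 and (PP^T)_{2k,2k+1} = 0 of (14),
    expressed as A_l . RR^T = b_l on the leading rows of R.
    block_dim is the side length of the symmetric A_l matrices."""
    A_list, b = [], []
    for k in range(m):                   # normality: unit-norm camera rows
        A_list.append(coo_matrix(([1.0], ([k], [k])),
                                 shape=(block_dim, block_dim)))
        b.append(1.0)
    for cam in range(m // 2):            # orthogonality within each camera
        r0, r1 = 2 * cam, 2 * cam + 1    # assumed row pairing per camera
        A_list.append(coo_matrix(([0.5, 0.5], ([r0, r1], [r1, r0])),
                                 shape=(block_dim, block_dim)))
        b.append(0.0)
    return A_list, np.array(b)
```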
4 Matrix Completion, Uniqueness and Convergence of MF-LRSDP
In this section, we state the main result of the matrix completion theory and discuss its implications
for the matrix factorization problem.
4.1 Matrix Completion Theory
Matrix completion theory considers the problem of recovering a low-rank matrix from a few samples
of its entries:
\min_X \; \mathrm{rank}(X) \quad \text{subject to} \quad X_{i,j} = M_{i,j} \;\; \text{for} \; (i, j) \in \Omega \qquad (16)
More specifically, it considers the following questions: 1) when does a partially observed matrix
have a unique low-rank solution? 2) How can this matrix be recovered? The answers to these
questions were provided in theorem 1.3 of [9], which states that if 1) the matrix M that we want to recover has row and column spaces incoherent with the standard basis, and 2) we are given enough entries (≥ O(r d^{6/5} log d), where d = max(m, n)), then there exists a unique low-rank solution to (16). Further, the solution can be obtained by solving a convex relaxation of (16) given by:

\min_X \; \|X\|_* \quad \text{subject to} \quad X_{i,j} = M_{i,j} \;\; \text{for} \; (i, j) \in \Omega \qquad (17)

where ||X||_* is the nuclear norm of X, given by the sum of its singular values.
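As a small illustration of the quantities involved, here is a sketch that evaluates the nuclear norm of (17) and checks the equality constraints; these helpers are ours, written for clarity rather than taken from [9].

```python
import numpy as np

def nuclear_norm(X):
    """Sum of singular values, the ||X||_* of (17)."""
    return np.linalg.svd(X, compute_uv=False).sum()

def agrees_on_omega(X, M, omega, tol=1e-8):
    """Check the equality constraints X_ij = M_ij for (i, j) in Omega."""
    return all(abs(X[i, j] - M[i, j]) <= tol for (i, j) in omega)
```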
4.2 Relation with Matrix Factorization and its Implications
In matrix completion the objective is to find a minimum rank matrix which agrees with the partial
observations (16), whereas in matrix factorization we assume the rank r to be known, as in the
problems of SfM and photometric stereo, and we use the rank as a constraint. For example, in our
LRSDP formulation, we have imposed this rank constraint by fixing the number of columns of the
factors A and B to r. However, though the matrix completion and factorization problems are defined differently, they are closely related as revealed by their very similar Lagrangian formulations.
This fact has been used in solving the matrix completion problem via matrix factorization with an
appropriate rank [16, 15, 20]. We should also note that matrix completion theory helps us answer the
question raised in [4]: when is missing data matrix factorization unique (up to a gauge)? And from
the discussion in the previous section, it should be clear that the conditions of the matrix completion
theory are sufficient for guaranteeing us the required uniqueness. Further, in our experimental evaluations (see next section), we have found that the LRSDP formulation, though a non-convex problem
in general, converges to the global minimum solution under these conditions.
5 Experimental Evaluation
We evaluate the performance of the proposed LRSDP-based factorization algorithm (MF-LRSDP)
on both synthetic and real data and compare it against other algorithms such as alternation [4],
damped Newton [4] and OptSpace [15], which is one of the state-of-the-art algorithms for matrix completion.
5.1 Evaluation with Synthetic Data
The important parameters in the matrix factorization with missing data problem are: the size of
the matrix M characterized by m and n, rank r, fraction of missing data and the variance σ² of
the observation noise. We evaluate the factorization algorithms by varying these parameters. We
consider two cases: data without noise and data with noise. For synthetic data without noise, we
generate n × n matrices M of rank r by M = AB^T, where A and B are n × r random matrices with
each entry being sampled independently from a standard Gaussian distribution N (0, 1). Each entry
is then revealed randomly according to the missing data fraction. For synthetic data with noise, we
add independent Gaussian noise N(0, σ²) to the observed entries generated as above.
Exact Factorization: a first comparison. We study the reconstruction rate of different algorithms
by varying the fraction of revealed entries per column (|Ω|/n) for noiseless 500 × 500 matrices of rank 5. We declare a matrix to be reconstructed if ||M − M̂||_F / ||M||_F ≤ 10⁻⁴, where M̂ = ÂB̂^T is the reconstructed matrix and ||·||_F denotes the Frobenius norm. Reconstruction rate is defined as the
fraction of trials for which the matrix was successfully reconstructed. In all the synthetic data experiments, we performed 10 trials. Figure 1(a) shows the reconstruction rate by MF-LRSDP, alternation
and OptSpace. MF-LRSDP gives the best reconstruction results as it needs fewer observations for
matrix reconstruction than the other algorithms. It is followed by OptSpace and alternation, respectively. MF-LRSDP also takes the least time, followed by OptSpace and alternation. For similar
comparison to other matrix completion algorithms such as ADMiRA [16], SVT [8] and FPCA [17],
the interested reader can look at [15], where OptSpace was shown to be consistently better than
these algorithms. For the remaining experiments on synthetic data, we mostly compare MF-LRSDP
against OptSpace. Note that we have not included the damped Newton algorithm in this comparison
because it is very slow for matrices of this size.
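A minimal sketch of the synthetic protocol just described (random rank-r matrix, uniformly random mask, relative-error success test); the `factorize` call is a placeholder for whichever solver is being evaluated, and all names here are ours.

```python
import numpy as np

def synthetic_trial(n=500, r=5, obs_per_col=20, sigma=0.0, seed=0):
    """Generate M = AB^T with a uniformly random mask and report whether a
    solver reconstructs it to relative error <= 1e-4."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, r))
    B = rng.standard_normal((n, r))
    M = A @ B.T
    mask = rng.random((n, n)) < obs_per_col / n   # ~|Omega|/n entries per column
    M_obs = np.where(mask, M + sigma * rng.standard_normal((n, n)), 0.0)

    M_hat = factorize(M_obs, mask, r)             # placeholder solver call
    rel_err = np.linalg.norm(M - M_hat) / np.linalg.norm(M)
    return rel_err <= 1e-4
```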
[Figure 1 plots: (a) reconstruction rate and (b) timing results (time in seconds, log scale), each vs. |Ω|/n, with curves for MF-LRSDP, alternation and OptSpace.]
Figure 1: (a) Reconstruction rate vs. fraction of revealed entries per column |Ω|/n for 500 × 500 matrices
of rank 5 by MF-LRSDP, alternation and OptSpace. The proposed algorithm MF-LRSDP gives the best reconstruction results since it can reconstruct matrices with fewer observed entries. (b) Time taken for reconstruction
by different algorithms. MF-LRSDP takes the least time.
Exact Factorization: vary size. We study the reconstruction rate vs. fraction of revealed entries per
column |Ω|/n for different sizes n of rank 5 square matrices by MF-LRSDP and OptSpace. Figure
2(a) shows that MF-LRSDP reconstructs matrices from fewer observed entries than OptSpace.
Exact Factorization: vary rank. We study the reconstruction rate vs. |Ω|/n as we vary the rank r of 500 × 500 matrices. Figure 2(b) again shows that MF-LRSDP gives better results than OptSpace.
[Figure 2 plots: (a) reconstruction rate for different sizes, (b) reconstruction rate for different ranks, (c) RMSE vs. noise standard deviation.]
Figure 2: (a) Reconstruction rate vs. fraction of revealed entries per column |Ω|/n for rank 5 square matrices of different sizes n by MF-LRSDP and OptSpace. MF-LRSDP reconstructs matrices from fewer observed entries than OptSpace. (b) Reconstruction rate vs. |Ω|/n for 500 × 500 matrices of different ranks by MF-LRSDP and OptSpace. Again MF-LRSDP needs fewer observations than OptSpace. (c) RMSE vs. noise standard deviation for rank 5, 200 × 200 matrices by MF-LRSDP, OptSpace, alternation and damped Newton. All algorithms perform equally well.
Noisy Factorization: vary noise standard deviation. For noisy data, we use the root mean square error RMSE = (1/√(mn)) ||M − M̂||_F as a performance measure. We vary the standard deviation σ of the additive noise for rank 5, 200 × 200 matrices and study the performance by
MF-LRSDP, OptSpace, alternation and damped Newton. Figure 2(c) shows that all the algorithms
perform equally well.
For timing comparisons, please refer to the supplementary material.
5.2 Evaluation with Real Data
We consider three problems: 1) affine SfM, 2) non-rigid SfM, and 3) photometric stereo.
Affine SfM. As discussed in section 3.3, for affine SfM, the m × n measurement matrix M is a rank
4 matrix with the last column of matrix B an all-one vector. M is generally an incomplete matrix
because not all the points are visible in all the cameras. We evaluate the performance of MF-LRSDP
on the "Dinosaur" sequence used in [4, 7], for which M is a 72 × 319 matrix with 72% missing entries. We perform 25 trials and at each trial we provide the same random initializations to MF-LRSDP, alternation and damped Newton (OptSpace has its own initialization technique). We use the root mean square error over the observed entries, ||W ∘ (M − M̂)||_F / √|Ω|, as our performance
measure. Figure 3 shows the cumulative histogram over the RMS pixel error. MF-LRSDP gives
the best performance followed by damped Newton, alternation and OptSpace. We further tested the
algorithms on a "longer Dinosaur", the result of which is provided in the supplementary material.
Non-rigid SfM. In non-rigid SfM, non-rigid objects are expressed as a linear combination of b basis
shapes. In this case, the m × n measurement matrix M can be expressed as M = AB^T, where A is an m × 3b matrix and B is an n × 3b matrix [3]. This makes M a rank 3b matrix. We test the performance of the algorithms on the "Giraffe" sequence [4, 7], for which M is a 240 × 167 matrix
with 30% missing entries. We choose the rank as 6. Figure 3 shows the cumulative histogram of 25
trials from which we conclude that MF-LRSDP, alternation and damped Newton give good results.
Photometric Stereo. Photometric stereo is the problem of estimating the surface normals of an
object by imaging that object under different lighting conditions. Suppose we have n images of
the object under different lighting conditions with each image consisting of m pixels (m surface
normals) and we arrange them as an m × n measurement matrix M. Then under Lambertian assumptions, we can express M as M = AB^T, where A is an m × 3 matrix representing the surface normals and reflectance and B is an n × 3 matrix representing the light-source directions and intensities [11]. Thus, M is a rank 3 matrix. Some of the image pixels are likely to be affected by
shadows and specularities and those pixels should not be included in the M matrix as they do not
obey the Lambertian assumption. This makes M an incomplete matrix. We test the algorithms
on the "Face" sequence [4, 7], for which M is a 2944 × 20 matrix with 42% missing entries. The
cumulative histogram in figure 3 shows that MF-LRSDP and damped Newton give the best results
followed by alternation and OptSpace.
[Figure 3 plots: cumulative histograms of RMS error for (a) the Dinosaur sequence, (b) the Giraffe sequence and (c) the Face sequence, each with curves for MF-LRSDP, alternation, damped Newton and OptSpace.]
Figure 3: Cumulative histogram (of 25 trials) for the Dinosaur, Giraffe and the Face sequence. For all of them,
MF-LRSDP consistently gives good results.
Additional constraints: Orthographic SfM. Orthographic SfM is a special case of affine SfM,
where the camera matrix A satisfies the additional constraint of ortho-normality, see section 3.3. We
show here that incorporating these constraints leads to a better solution. Figure 4 shows the input
point tracks, reconstructed point tracks without the constraints and reconstructed point tracks with
the constraints for the Dinosaur turntable sequence. Without the constraints many tracks fail to be
circular, whereas with the constraints all of them are circular (the dinosaur sequence is a turntable
sequence and the tracks are supposed to be circular). Thus, incorporating all the constraints of a
problem leads to better solution and MR-LRSDP provides a very flexible framework for doing so.
[Figure 4 panels: (a) input point tracks, (b) reconstructed tracks without constraints, (c) reconstructed tracks with constraints.]
Figure 4: (a) Input (incomplete) point tracks of the Dinosaur turntable sequence, (b) reconstructed tracks
without orthonormality constraints and (c) reconstructed tracks with orthonormality constraints. Without the
constraints many tracks fail to be circular, whereas with the constraints all of them are circular (the dinosaur
sequence is a turntable sequence and the tracks are supposed to be circular).
6 Conclusion and Discussion
We have formulated the matrix factorization with missing data problem as a low-rank semidefinite
programming problem MF-LRSDP. MF-LRSDP is an efficient algorithm that can be used for solving large-scale factorization problems. It is also flexible for handling many additional constraints
such as the ortho-normality constraints of orthographic SfM. Our empirical evaluations on synthetic
data show that it needs fewer observations for matrix factorization as compared to other algorithms
and it gives very good results on the real problems of SfM, non-rigid SfM and photometric stereo.
We note that though MF-LRSDP is a non-convex problem, it finds the global minimum under the
conditions of matrix completion theory. As future work, it would be interesting to find a theoretical justification for this. Also, it would be interesting to find out how MF-LRSDP performs on
collaborative filtering problems.
References
[1] H. Aanæs, R. Fisker, K. Åström, and J. M. Carstensen. Robust factorization. IEEE TPAMI, 2002.
[2] S. Brandt. Closed-form solutions for affine reconstruction under missing data. In Stat. Methods for Video
Proc. (ECCV 02 Workshop), 2002.
[3] C. Bregler, A. Hertzmann, and H. Biermann. Recovering non-rigid 3d shape from image streams. In
CVPR, 2000.
[4] A. M. Buchanan and A. W. Fitzgibbon. Damped newton algorithms for matrix factorization with missing
data. In CVPR, 2005.
[5] S. Burer and C. Choi. Computational enhancements in low-rank semidefinite programming. Optimization
Methods and Software, 2006.
[6] S. Burer and R.D.C. Monteiro. A nonlinear programming algorithm for solving semidefinite programs
via low-rank factorization. Mathematical Programming (Series B), 2001.
[7] P. Chen. Optimization algorithms on subspaces: Revisiting missing data problem in low-rank matrix. IJCV,
2008.
[8] J. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM
Journal on Optimization, 2010.
[9] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 2009.
[10] N. Guilbert, A.E. Bartoli, and A. Heyden. Affine approximation for direct batch recovery of euclidian
structure and motion from sparse data. IJCV, 2006.
[11] H. Hayakawa. Photometric stereo under a light source with arbitrary motion. JOSA, 1994.
[12] D. Q. Huynh, R. Hartley, and A. Heyden. Outlier correction in image sequences for the affine camera. In
ICCV, 2003.
[13] D. W. Jacobs. Linear fitting with missing data for structure-from-motion. CVIU, 2001.
[14] Q. Ke and T. Kanade. Robust l1 norm factorization in the presence of outliers and missing data by
alternative convex programming. In CVPR, 2005.
[15] R. H. Keshavan and S. Oh. A gradient descent algorithm on the grassman manifold for matrix completion.
CoRR, abs/0910.5260, 2009.
[16] K. Lee and Y. Bresler. Admira: Atomic decomposition for minimum rank approximation. CoRR,
abs/0905.0044, 2009.
[17] S. Ma, D. Goldfarb, and L. Chen. Fixed point and bregman iterative methods for matrix rank minimization. Mathematical Programming, 2009.
[18] D. Martinec and T. Pajdla. 3d reconstruction by fitting low-rank matrices with missing data. In CVPR,
2005.
[19] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. http://www-stat.stanford.edu/~hastie/Papers/SVD_JMLR.pdf, 2009.
[20] R. Meka, P. Jain, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. CoRR,
abs/0909.5457, 2009.
[21] T. Okatani and K. Deguchi. On the wiberg algorithm for matrix factorization in the presence of missing
components. IJCV, 2007.
[22] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction.
In ICML, 2005.
[23] H. Shum, K. Ikeuchi, and R. Reddy. Principal component analysis with missing data and its application
to polyhedral object modeling. IEEE TPAMI, 1995.
[24] N. Srebro, J. D. M. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. In NIPS, 2004.
[25] J. P. Tardif, A. Bartoli, M. Trudeau, N. Guilbert, and S. Roy. Algorithms for batch matrix factorization
with application to structure-from-motion. In CVPR, 2007.
[26] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization
method. IJCV, 1992.
[27] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Rev., 1996.
[28] R. Vidal and R. Hartley. Motion segmentation with missing data using powerfactorization and gpca. In
In CVPR, 2004.
3,437 | 4,112 | Feature Transitions with Saccadic Search:
Size, Color, and Orientation Are Not Alike
Stella X. Yu
Computer Science Department
Boston College
Chestnut Hill, MA 02467
[email protected]
Abstract
Size, color, and orientation have long been considered elementary features whose
attributes are extracted in parallel and available to guide the deployment of attention. If each is processed in the same fashion with simply a different set of local
detectors, one would expect similar search behaviours on localizing an equivalent
flickering change among identically laid out disks. We analyze feature transitions
associated with saccadic search and find out that size, color, and orientation are not
alike in dynamic attribute processing over time. The Markovian feature transition
is attractive for size, repulsive for color, and largely reversible for orientation.
1 Introduction
Size, color, and orientation have long been considered elementary features [14] that are available to
guide attention and visual search [17]. Their special status in early visual processing is supported
by a large volume of psychophysical evidence on how they can mediate effortless texture segregation, recombine in illusory conjunctions, and pop out in feature search [13, 16]. There is also
physiological evidence on how these features could be extracted with separate sets of dedicated detectors working in parallel across the entire space [6]. Consequently, in schematic diagrams as well
as computational models on visual saliency [13, 3, 12], image segmentation [5], object recognition
[10, 4, 20], and scene classification [12, 11], it is routinely assumed that features at all scales, colors,
and orientations are processed and available simultaneously.
While size, color, and orientation are alike at parallel local detections across space, they may not be
alike at serial deployment of attention across time. We investigate this issue in a gaze-tracked visual
search experiment which often requires multiple saccades for the subject to locate the target (Fig. 1).
[Figure 1 diagram: a regular grid of 24 disks, each labeled disk1 or disk2 in a uniformly random arrangement.]
Figure 1: Two kinds of disks are uniformly randomly distributed in a fixed regular layout. Only one
disk changes its kind during a repeated flickering presentation. For the same size of change, does it
matter to visual search whether the two kinds of disks are rendered in size, color, or orientation?
[Figure 2 diagram: trial timeline of fixation (1000 ms), flicker stimulus (120 ms per frame), detection (mouse click), and localization (mouse click).]
Figure 2: Each trial goes through fixation, stimulus, detection, and localization stages. A fixation
dot is displayed for 1 second before the onset of the flicker stimulus, with disk image 1, blank, disk
image 2, blank repeatedly presented for 120ms each. The subject issues a mouse click as soon as he
detects the change, and the last seen disk image remains on till he clicks the disk of change. A blank screen is then displayed for 2 seconds before the start of the next trial.
We present two kinds of disks in a fixed regular layout in a flicker paradigm, and the subject's task
is to locate the only disk that changes its kind (Fig. 2). The paradigm induces change blindness,
where a large difference between two images becomes strikingly difficult to detect with a blank
in-between, even with repeated presentations [9, 2, 8, 18]. Without the blank, the change elicits a
singular motion signal which automatically draws the viewer's attention to the location of change;
with the blank, the motion signal is disrupted and overwhelmed by those motion transients between
either image and the blank, effectively masking the location of change.
If the magnitude of change is comparable across feature dimensions, does it matter whether the disks
are rendered in size, color, or orientation? That is, does visual search vary according to whether the
same array of disks are: 1) small and large, 2) black and white, or 3) horizontal and vertical disks? If
size, color, and orientation are processed in the same fashion with dedicated local detectors operating
in parallel across space, then the detector responses are identical spatially at any time instance in the
3 scenarios. The question is whether the deployment of attention, i.e. deciding what disks to look at
next and how to look, depend on which filters produce these responses.
Note that our stimuli decouple the target of feature search from visual saliency in the space. Our
target is defined not by one of the attributes as done in static search displays [14, 17], but by the temporal change of the attribute. At any time instance, the attributes are uniformly random everywhere,
so the target cannot draw attention to itself, but has to be discovered with search. The effect of the
attribute itself on attention can thus be studied without the confounding factor of saliency.
The focus of this paper is on how the feature space is navigated with saccadic search. We formulate
a feature descriptor for each fixation, based on which we develop a Markovian feature transition
model for saccadic eye movements. Our model reveals that feature transition is attractive for size,
repulsive for color, and largely reversible for orientation, suggesting that size, color, and orientation
are not alike in dynamic attribute processing over time.
2 Gaze-Tracked Change Blindness Experiment
We investigate whether visual search for attribute change differs when the stimulus is rendered in
size, color, or orientation with the same layout. We establish in a separate experiment that the change
is equivalent among dimensions: Detection is equally fast and accurate for a change between two
attributes across dimensions and for a no-change within each dimension across two attributes.
[Figure 3 diagram: the same 4 × 6 spatial layout rendered as flicker stimuli in size, color, and orientation; the 1st and 2nd layout matrices of attribute labels 1 and 2 are identical except for the single circled disk of change.]
Figure 3: Flicker stimuli are rendered in the same layout but separately in size, color, and orientation.
The 1st image contains 12 attribute-1 disks and 12 attribute-2 disks in a uniformly random spatial
distribution. The 2nd image is identical to the 1st image except that 1 disk changes its attribute. It
could be any of the 24 disks. The disk of change here is circled in both layout matrices.
Stimuli. There are 2 kinds of disks for each dimension. Size has 2 radii, 0.45° for small and 1.35° for large. Color has 2 values, 0.3 for black and 0.7 for white on a 0–1 value scale. Orientation has 2 angles, 0° for horizontal and 90° for vertical, with disk radii 0.45° × 1.35° along the two directions. Both size and orientation stimuli are of black value 0.3. Color stimuli are of medium disk radius 0.9°.
The background is of neutral gray value 0.5. Here we restrict color to luminance only, as color hue
processing is uniquely foveal, which would greatly confound explanations for search behaviours.
The flicker stimuli for the 3 dimensions are rendered in an identical spatial layout. Each stimulus
involves a pair of 24-disk images which are identical except for one disk. These 24 disks are located
centrally on a regular 4 × 6 grid, with an inter-disk distance of 5.4°, which is 4 times the maximal
radius a disk could assume. The 1st image of the stimulus consists of uniformly randomly distributed
12 attribute-1 disks and 12 attribute-2 disks. The 2nd image changes one of the 24 disks (Fig. 3).
Apparatus. The display extends 25.6° × 34.1° at a viewing distance of 5 meters. Gaze data are recorded with a Tobii x50 eye tracker at 50 Hz sampling rate and 0.5°–0.7° accuracy. Two clock-synced 3.2 GHz Dell Precision computers control the eye tracker and the stimulus presentation respectively. The eye tracker is calibrated at the beginning of each data recording session.
Procedure. Each trial begins with a fixation dot of radius 0.5° shown at the center of the display
for 1 second. The flicker stimulus, in the sequence of disk image 1, blank, disk image 2, and blank,
is then repeatedly presented for 120 ms each. Once the subject issues a mouse click to indicate his
detection of the change, the last disk image remains on till the location of change is clicked (Fig. 2).
There are 3 sets of random stimuli run in 3 sessions. Each session has 3 blocks of 24 trials each, one
trial for one change location and one block for one dimension. The trials are completely randomized
in a block, and the blocks are also randomized and balanced among the subjects.
The subject is told that two images differing in only one disk are presented repeatedly. His task is to
detect and localize the changing disk. He should issue a click as soon as he detects the change. The
flickering then stops at the last seen disk image, and he should click the disk of change.
Participants. A total of 24 naive subjects with normal or corrected-to-normal vision participated
after providing informed consent and were compensated with cash. 11, 8, and 5 subjects took part
in one, two and all three sessions respectively.
3 Performance Analysis
We evaluate the task performance on both the accuracy measured by the percentage of correct change
localizations and the reaction time measured from the flicker stimulus onset to the subject's first
mouse click for indicating a detection. Fig. 4 shows that localizing an equivalent change among
identically laid out items yields significantly different performances in the 3 dimensions. It is fastest
and most accurate in size, less so in orientation, and least so in color.
[Figure 4 plot: one ellipse each for size, orientation, and color; x-axis: reaction time (seconds), 2.3 to 3.5; y-axis: accuracy (%), 93 to 99.]
Figure 4: Change localization given an identical layout is best (fastest and most accurate) in size, worse in orientation, and worst in color. The sample means and their standard errors of reaction times (x-axis) and accuracies (y-axis) are indicated by the centers and radii of ellipses respectively. The differences are significant, with one-way ANOVA results of F(2, 3021) = 10.43, p = 3.1 × 10⁻⁵ for accuracy and F(2, 3021) = 20.43, p = 1.5 × 10⁻⁹ for reaction time. We treat the data from all the subjects as samples from a single subject population, since we are interested not in individual subjects' performance, but in the distinction between feature dimensions.
The human visual system must accomplish change localization by examining more than one disk
per flicker cycle, since the mean reaction time is only about 5, 6, and 7 cycles (0.12 × 4 = 0.48
seconds per cycle) for size, orientation, and color respectively. If only one item is looked at and
ruled out per cycle, on average it would require fixating 50% of 24 disks till hitting the target disk,
i.e. in 12 flicker cycles. Our average of 6 cycles suggests that about 2 disks are examined per cycle.
When a disk is being fixated, all its 8 neighbouring disks are mostly out of fovea, since they are
either 5.4° or 7.7° apart. Some coarse information about neighbouring disks must be utilized in
each fixation. The neighbourhood effect on change localization is studied in Fig. 5 and Fig. 6.
[Figure 5 panels: commonly best spatial layout (100%, 1.4 s): a (100%, 1.36 s), b (100%, 1.52 s), c (100%, 1.20 s); commonly worst spatial layout (80%, 7.3 s): d (83%, 11.98 s), e (78%, 4.18 s), f (78%, 5.88 s).]
Figure 5: The common spatial layout that yields the best (a, b, c) or the worst (d, e, f) change localization performance in all 3 feature dimensions. Each pair of numbers (a%, b s) indicates mean
accuracy a and reaction time b. Shown here is the average image of a flicker stimulus, with the disk
of change taking two attributes, except in the case of color: Since the average has the same intensity
as background, the change is outlined in white instead. The commonly best layout has the change
among uniform attributes, whereas the commonly worst layout has a mixture of attributes.
[Figure 6 panels: dimension-specific best (a, b, c) and worst (d, e, f) spatial layouts; below each panel, three (accuracy, reaction time) pairs give the performance for the same layout rendered in size, color, and orientation.]
Figure 6: The dimension-specific spatial layout that yields the best or worst change localization
performance in one dimension only, with the largest performance gap over the other 2 dimensions.
Same convention as Fig. 5. The 3 rows of numbers below each image indicate the mean accuracy
and reaction time for a stimulus rendered in the same layout but in size, color, and orientation
respectively. The localization of a flickering change is easier in a primarily large neighbourhood for
size, in any homogeneous neighbourhood for color, and in a collinear neighbourhood for orientation.
Fig. 5 shows that a uniform neighbourhood tends to facilitate change localization, whereas a mixed
neighbourhood tends to hinder change localization, no matter which dimension the disks are rendered in. Fig. 6 shows distinctions in the neighbourhood uniformity between the 3 dimensions.
For size, change localization is easier in a neighbourhood populated with large disks. If the dominant size is large (Fig. 6a), missing a large would be easier to detect, whereas if the dominant size
is small (Fig. 6d), missing a small would be difficult to detect. That is, unlike color or orientation,
the attributes of size are asymmetrical: small produces a smaller response than large, with size 0 for
a response of 0 in the limiting case. When neither small nor large is dominant in the neighbourhood
(Fig. 5d), change localization becomes most difficult. For color, change localization is easier if one
color, either black or white, dominates the neighbourhood. For orientation, it is easier only if the
oriented disk is part of collinear layout.
4 Feature Analysis with Eye Movements
Having seen that neighbourhood uniformity has an impact on the change localization performance,
we investigate how it influences the decision on which item to look at in the next fixation.
We first associate a fixation with a set of f -numbers at that location, each measuring the overall
attribute density in a neighbourhood defined by a Gaussian spatial weighting function. Let loc(i)
denote the location of pixel i, dist(i, j) the distance between pixels i and j, and G(x; σ) the 1D Gaussian function of x with mean 0 and standard deviation σ. We have:

f_0(i) = \begin{cases} 0, & \text{no disk at } loc(i) \\ -1, & \text{disk type 1 at } loc(i) \\ 1, & \text{disk type 2 at } loc(i) \end{cases} \qquad (1)

f_\sigma(i) = \frac{\sum_j f_0(j) \, G(dist(i, j); \sigma)}{\sum_j G(dist(i, j); \sigma)} \qquad (2)
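A minimal NumPy sketch of (1)-(2), computing an f_σ map from disk positions and types; the array layout and helper names are our own choices for illustration.

```python
import numpy as np

def f_sigma_map(disk_xy, disk_type, grid_xy, sigma):
    """Evaluate f_sigma of (2) at each query location.
    disk_xy: (n_disks, 2) disk centers; disk_type: values in {-1, +1};
    grid_xy: (n_pix, 2) pixel locations; sigma: Gaussian std (degrees)."""
    # numerator: only disk locations contribute, since f_0 = 0 elsewhere
    d_disk = np.linalg.norm(grid_xy[:, None, :] - disk_xy[None, :, :], axis=2)
    num = np.exp(-0.5 * (d_disk / sigma) ** 2) @ disk_type
    # denominator: G summed over all pixel locations j (O(n_pix^2) memory,
    # fine for a sketch; a separable convolution would scale better)
    d_all = np.linalg.norm(grid_xy[:, None, :] - grid_xy[None, :, :], axis=2)
    den = np.exp(-0.5 * (d_all / sigma) ** 2).sum(axis=1)
    return num / den
```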
[Figure 7 panels: f_0, f_1, f_2, f_4 images of a flicker stimulus.]
Figure 7: The f-number images of a flicker stimulus. A negative f number (in blue shades) indicates the dominance of attribute 1, whereas a positive f number (in red shades) indicates the dominance of attribute 2. When the f number is close to 0 (in gray shades), neither attribute dominates the neighbourhood. f_σ measures the average attribute in a Gaussian neighbourhood with standard deviation σ. The 3 circles on the target of change mark the σ, 2σ, 3σ radii. While σ = 1 covers only one disk in isolation, σ = 2 also covers 8 adjacent disks, and σ = 4 covers 16 adjacent disks.
An f value of −1, 1, or 0 indicates the dominance of attribute 1, attribute 2, or neither. With an increasing σ, f_σ estimates the majority of attributes in a larger neighbourhood.
Each location is now associated with a set of f numbers, (f1 , f2 , . . .), and they as a whole capture the
attribute homogeneity surrounding that location. Fig. 7 shows f for the best spatial layout in Fig. 5.
At σ = 1, the neighbourhood could only contain one disk, thus f_1(i) = f_0(i) for most locations i. At σ = 2, it also contains 8 adjacent neighbours: f_2(i) = f_1(i) for i in a uniform neighbourhood, and f_2(i) ≈ 0 for i in a mixed neighbourhood. At σ = 4, the neighbourhood is about half the size of the display, with f_4(i) = 0 for i bordering two large different uniform neighbourhoods.
Fig. 8 shows the distributions of f associated with all the fixations. The two peaks of f1 in all the
3 feature dimensions demonstrate that visual search tends to fixate disks rather than empty spaces
between disks. There is also an attribute bias in each dimension, and the bias is weakest in orientation and strongest in size. This bias is not diminished in f2 , demonstrating that visual search
tends to navigate in groups of large disks. The single peak of f4 at value 0 not only confirms the
uniform randomness of our stimuli, but also reveals that the empty spaces being fixated tend to be
those borders between different attribute neighbourhoods at a coarser scale (Fig. 7 Column 4).
[Figure 8 plots: probability distributions of f_1, f_2, f_4 (in increasing line widths) over the range −1 to 1, one panel each for size, color, and orientation.]
Figure 8: The probability distribution of f_1, f_2, f_4 (in increasing line widths) associated with all the fixations shows a strong preference in size for large disks as well as areas of large disks (+1), a small preference in color for black disks (−1), and a slight preference in orientation for vertical
disks (+1). The single peak of f4 at 0 reveals most fixations occurring near those disks separating
large groups of uniform attributes. These statistics are robust with respect to subject sub-sampling
validation, e.g. over 8 samplings of 10 subjects only, the maximal standard error is 0.006.
[Figure 9 panels: transition matrices P_1, P_2, P_4 for size, color, and orientation, at saccade distances d ≤ 2, d ∈ [3, 9], and d ≥ 10.]
Figure 9: The probability distribution of f_1, f_2, f_4 associated with all the saccades shows a preference of jumping to a disk of the same attribute regardless of saccade distance d and neighbourhood size σ. Each transition P(a, b; d, σ) given d and σ is visualized as a 2D image, e.g. for size, the lower right corner of P_1 is the frequency of saccading from large to large. A darker gray indicates
a larger transitional probability. As d increases, it is more likely to jump to a different attribute, and
the chance is more uniformly random in orientation.
Fig. 9 shows the joint distributions of two f numbers associated with the initiating fixations and the
landing fixations of all the saccades, organized according to the saccade distance. For a saccade
from pixel i to pixel j, it contributes one count of transition from a to b in the f -space:
P_\sigma(a, b \mid d) = \mathrm{Prob}\big(f_\sigma(i) = a, \; f_\sigma(j) = b, \; loc(i) \xrightarrow{\text{saccade}} loc(j) \;\big|\; dist(i, j) = d\big) \qquad (3)
Fig. 9 shows that all the transitions within 2° tend to cluster tightly along the diagonals, i.e. between
the same attributes. At such a short distance, each saccade could not reach a different disk. These
transitions are thus between the same disks or between the same inter-disk empty spaces, by e.g.
micro-saccades. The bias towards a particular attribute is also clear in each dimension: There are
more transitions between larges than between smalls, more between blacks than between whites,
about the same between horizontals and between verticals. As the saccade distance increases, disks
of various attributes become viable candidates to saccade to. It becomes more likely to saccade to
another disk of the same or different attribute than to saccade to an empty space (i.e. low probabilities in the middle rows or columns of the P_σ images).
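A sketch of how the transition histograms of (3) can be accumulated from an ordered sequence of fixations; the three distance bands and the 3-value quantization follow the figure, while the function and variable names are ours.

```python
import numpy as np

def transition_counts(fix_xy, f_vals, theta=0.15):
    """Accumulate saccade transition counts P(a, b | d) of (3).
    fix_xy: (n_fix, 2) fixation locations in temporal order; f_vals: the
    f_sigma value at each fixation. Returns distance band -> 3x3 counts."""
    bands = {"d<=2": np.zeros((3, 3)),
             "3<=d<=9": np.zeros((3, 3)),
             "d>=10": np.zeros((3, 3))}
    quant = lambda f: 0 if f < -theta else (2 if f > theta else 1)
    for i in range(len(fix_xy) - 1):
        d = np.linalg.norm(fix_xy[i + 1] - fix_xy[i])  # saccade amplitude (deg)
        key = "d<=2" if d <= 2 else ("3<=d<=9" if d <= 9 else "d>=10")
        bands[key][quant(f_vals[i]), quant(f_vals[i + 1])] += 1
    return bands
```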
?
We further examine P_Δ(a, b | d) at Δ = 1 and d in the middle range of [2°, 10°], associated with
saccades towards adjacent disks. We quantize f into 3 values based on a threshold θ: −1 if f < −θ,
0 if |f| < θ, and 1 if f > θ. The joint probability can be decomposed into the marginal probability π(a)
at the initiating attribute a and the conditional probability P(b|a) for the landing attribute b:

$$P(a, b) = \pi(a) \cdot P(b \mid a) = \Big(\sum_c P(a, c)\Big) \cdot \frac{P(a, b)}{\sum_c P(a, c)} \qquad (4)$$
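As a minimal sketch of the quantization and of the decomposition in (4) (array and function names are illustrative assumptions, not from the paper):

```python
import numpy as np

def quantize(f, theta=0.15):
    """Map a continuous f value to -1, 0, or +1 using threshold theta."""
    return -1 if f < -theta else (1 if f > theta else 0)

def decompose(joint):
    """Split a 3x3 joint matrix P(a, b) into pi(a) and P(b | a), as in (4)."""
    joint = joint / joint.sum()        # normalize counts to a distribution
    pi = joint.sum(axis=1)             # marginal over the landing value b
    P_cond = joint / pi[:, None]       # row-normalize to get P(b | a)
    return pi, P_cond
```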
While π(a) measures the proportion of fixations at attribute a among all the fixations, P(b|a) measures the proportion of saccades towards b given the current fixation at a. Consistent with Fig. 8,
π(a) in Fig. 10 shows more visits to large, black, and vertical than to small, white, and horizontal.
The most interesting finding comes from P(b|a): while the attributes are uniformly random in the
neighbourhood, our eyes do not act like a blind space wanderer. 1) For size, it is much more likely to
visit large no matter what is being looked at in the current fixation, i.e., attribute large is an attractor
in the f space. 2) For color, it is more likely to visit black from either black or white, but not from an
empty space, i.e., if no disk is in fixation, it is more likely to visit white in the next fixation. Unlike
size, white is not an attractor but a repeller: once at white, the eyes are more inclined to leave for
black than to stay in the group of whites. 3) For orientation, it is only slightly more likely to visit
vertical than horizontal. When the eyes are on an empty space, it is in fact equally likely to visit
horizontal or vertical in the next fixation, i.e., there is no strong attractor or repeller in orientation,
and the two attributes are largely reversible. Such biases also persist over larger saccades.
                 size                        color                       orientation
            π(a)   P(b|a)               π(a)   P(b|a)               π(a)   P(b|a)
  a = −1    .43    .40  .10  .50        .52    .53  .09  .38        .46    .44  .08  .48
  a =  0    .04    .36  .15  .49        .04    .33  .16  .51        .04    .44  .12  .44
  a = +1    .53    .37  .12  .51        .44    .48  .09  .43        .50    .43  .10  .47
Figure 10: The probability distribution of f1 for all the saccades within [2°, 10°]. f1 is quantized into
−1, 0, 1, corresponding to attribute 1, empty space, and attribute 2 respectively, based on threshold
θ = 0.15. These statistics are validated over 13 leave-50%-subjects-out samplings, with the standard
error for each number less than 0.01 except for the second row of P(b|a) (valued at 0.02, 0.01, 0.02
instead). Let a and b denote attributes, or row and column indices into the transition table. π(a) is
the overall probability of looking at a. P(b|a) is the probability of saccading to b given the current
fixation at a. For example, for size, π shows that 43% of all the fixations look at small, 4% at empty,
and 53% at large, whereas the 3rd row of P shows that upon fixating at large, there is a 51% chance
of saccading to another large, a 37% chance to a small disk, and a 12% chance to an empty space.
The most likely action is highlighted in red. Search in size tends to be attracted to large, search in
color tends to be repelled by white, whereas search in orientation is largely reversible between
horizontal and vertical.
These results cannot be explained by visual crowding, where the perception of peripherally viewed
shapes is impaired with nearby similar shapes [7]. While critical spacing is always roughly half the
viewing eccentricity and independent of stimulus size, crowding magnitude differs across features:
Size crowding is almost as strong as orientation crowding, whereas the effect is much weaker for
color [15]. Therefore, feature crowding cannot explain the different natures of feature transitions for
size, color, and orientation, or why such biases persist over larger saccades.
5 Summary
Size, color, and orientation are considered elementary features extracted with separate sets of detectors responding in parallel across space. They are modeled by the same computational mechanism,
differing only in the filters that implement their local attribute detectors.
We conducted a gaze-tracked change blindness experiment, where the subject needs to locate a
flickering change among items rendered identically in space and separately in size, color, and orientation. If the deployment of attention during search depends only on the master spatial map of
responses [14, 13, 3, 17, 12], regardless of which type of filters produces them, we should observe
little difference in the search performance and behaviour among the 3 dimensions.
Our search performance analysis shows that change localization is fastest and most accurate in size,
less in orientation, worst in color. Change in a uniform neighbourhood is easier to localize, but only
if the attribute is large for size, or if the items form a collinear extension for orientation.
Our feature analysis with eye movements shows that search in each dimension has an attribute bias:
large for size, black for color, and vertical for orientation, and a common spatial bias on border items
separating large uniform groups. However, feature transitions with saccades have a strong attractor
bias for large, a repeller bias for white, and very little bias for orientation.
These biases create interesting dynamics in serial processing over time which could explain why
localization is most effective in size and worst in color. The difference is not due to their alike local
detectors in space, but due to their own selectivity in grouping [8, 19, 1] over time: Focusing on the large
group essentially cuts down the search space by half, whereas excursion into the white group from
the primary black group only hurts the spatial efficiency of search.
Our results and analysis methods on these elementary features thus provide new insights into the
computation of visual saliency and task-specific visual features across dimensions and over time.
Acknowledgements
This research is funded by NSF CAREER IIS-0644204 and a Clare Boothe Luce Professorship.
I would like to thank Dimitri Lisin, Marcus Woods, Sebastian Skardal, Peter Sempolinski, David
Tolioupov, and Kyle Tierney for earlier discussions and excellent assistance with the experiments. I
am grateful for many insightful comments I have received from Jeremy Wolfe, Ronald Rensink, and
anonymous reviewers; their valuable suggestions have greatly improved the paper.
References
[1] G. Fuggetta, S. Lanfranchi, and G. Campana. Attention has memory: priming for the size of the attentional focus. Spatial Vision, 22(2):147–59, 2009.
[2] J. Grimes. On the failure to detect changes in scenes across saccades. 2, 1996.
[3] L. Itti and C. Koch. Computational modelling of visual attention. Nature Neuroscience, pages 194–203, 2001.
[4] D. G. Lowe. Distinctive image features from scale-invariant keypoints. 2003.
[5] J. Malik, S. Belongie, T. Leung, and J. Shi. Contour and texture analysis for image segmentation. International Journal of Computer Vision, 2001.
[6] J. H. R. Maunsell and W. T. Newsome. Visual processing in monkey extrastriate cortex. Annual Review of Neuroscience, 10:363–401, 1987.
[7] D. G. Pelli, M. Palomares, and N. J. Majaj. Crowding is unlike ordinary masking: distinguishing feature integration from detection. Journal of Vision, 4(12):1136–69, 2004.
[8] R. Rensink. Visual search for change: A probe into the nature of attentional processing. Visual Cognition, 7:345–76, 2000.
[9] R. A. Rensink, J. K. O'Regan, and J. J. Clark. Image flicker is as good as saccades in making large scene changes invisible. 24, pages 26–8, 1995.
[10] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–25, 1999.
[11] T. Serre, A. Oliva, and T. Poggio. A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences, 104(15):6424–9, 2007.
[12] A. Torralba. Contextual influences on saliency. In L. Itti, G. Rees, and J. Tsotsos, editors, Neurobiology of Attention, pages 586–93. Academic Press, 2004.
[13] A. Treisman. The perception of features and objects. In R. D. Wright, editor, Visual Attention. Oxford University Press, 1998.
[14] A. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive Psychology, 12(1):97–136, 1980.
[15] R. van den Berg, J. B. T. M. Roerdink, and F. W. Cornelissen. On the generality of crowding: visual crowding in size, saturation, and hue compared to orientation. Journal of Vision, 7(2):1–11, 2007.
[16] J. M. Wolfe. Asymmetries in visual search: an introduction. Perception and Psychophysics, 63:381–9, 2001.
[17] J. M. Wolfe and T. S. Horowitz. What attributes guide the deployment of visual attention and how do they do it? Nature Neuroscience, 5, 2004.
[18] J. M. Wolfe, A. Reinecke, and P. Brawn. Why don't we see changes? The role of attentional bottlenecks and limited visual memory. Visual Cognition, 19(4-8):749–80, 2006.
[19] Y. Yeshurun and M. Carrasco. The effects of transient attention on spatial resolution and the size of the attentional cue. Perception and Psychophysics, 70(1):104–13, 2008.
[20] H. Zhang, A. C. Berg, M. Maire, and J. Malik. SVM-KNN: Discriminative nearest neighbor classification for visual category recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2126–36, 2006.
On the Convexity of Latent Social Network Inference
Jure Leskovec
Department of Computer Science
Stanford University
[email protected]
Seth A. Myers
Institute for Computational
and Mathematical Engineering
Stanford University
[email protected]
Abstract
In many real-world scenarios, it is nearly impossible to collect explicit social network data. In such cases, whole networks must be inferred from underlying observations. Here, we formulate the problem of inferring latent social networks
based on network diffusion or disease propagation data. We consider contagions
propagating over the edges of an unobserved social network, where we only observe the times when nodes became infected, but not who infected them. Given
such node infection times, we then identify the optimal network that best explains
the observed data. We present a maximum likelihood approach based on convex
programming with a l1 -like penalty term that encourages sparsity. Experiments
on real and synthetic data reveal that our method near-perfectly recovers the underlying network structure as well as the parameters of the contagion propagation
model. Moreover, our approach scales well as it can infer optimal networks of
thousands of nodes in a matter of minutes.
1 Introduction
Social network analysis has traditionally relied on self-reported data collected via interviews and
questionnaires [27]. As collecting such data is tedious and expensive, traditional social network
studies typically involved a very limited number of people (usually less than 100). The emergence
of large scale social computing applications has made massive social network data [16] available,
but there are important settings where network data is hard to obtain, and the whole network
must then be inferred from the data. For example, populations like drug injection users or men who
have sex with men are "hidden" or "hard-to-reach". Collecting social networks of such populations
is near impossible, and thus whole networks have to be inferred from the observational data.
Even though inferring social networks has been attempted in the past, it usually assumes that the
pairwise interaction data is already available [5]. In this case, the problem of network inference
reduces to deciding whether to include the interaction between a pair of nodes as an edge in the underlying network. For example, inferring networks from pairwise interactions of cell-phone call [5]
or email [4, 13] records simply reduces down to selecting the right threshold ? such that an edge
(u, v) is included in the network if u and v interacted more than ? times in the dataset. Similarly,
inferring networks of interactions between proteins in a cell usually reduces to determining the right
threshold [9, 20].
We address the problem of inferring the structure of unobserved social networks in a much more
ambitious setting. We consider a diffusion process where a contagion (e.g., disease, information,
product adoption) spreads over the edges of the network, and all that we observe are the infection
times of nodes, but not who infected whom, i.e., we do not observe the edges over which the contagion
spread. The goal then is to reconstruct the underlying social network along the edges of which the
contagion diffused.
We think of a diffusion on a network as a process where neighboring nodes switch states from inactive to active. The network over which activations propagate is usually unknown and unobserved.
Commonly, we only observe the times when particular nodes get "infected" but we do not observe
who infected them. In case of information propagation, as bloggers discover new information, they
write about it without explicitly citing the source [15]. Thus, we only observe the time when a blog
gets "infected" but not where it got infected from. Similarly, in disease spreading, we observe people getting sick without usually knowing who infected them [26]. And, in a viral marketing setting,
we observe people purchasing products or adopting particular behaviors without explicitly knowing
who was the influencer that caused the adoption or the purchase [11]. Thus, the question is, if we assume that the network is static over time, is it possible to reconstruct the unobserved social network
over which diffusions took place? What is the structure of such a network?
We develop a convex programming based approach for inferring latent social networks from diffusion data. We first formulate a generative probabilistic model of how, on a fixed hypothetical
network, contagions spread through the network. We then write down the likelihood of observed
diffusion data under a given network and diffusion model parameters. Through a series of steps we
show how to obtain a convex program with a l1 -like penalty term that encourages sparsity. We evaluate our approach on synthetic as well as real-world email and viral marketing datasets. Experiments
reveal that we can near-perfectly recover the underlying network structure as well as the parameters
of the propagation model. Moreover, our approach scales well since we can infer optimal networks
of a thousand nodes in a matter of minutes.
Further related work. There are several different lines of work connected to our research. First is
the network structure learning for estimating the dependency structure of directed graphical models [7] and probabilistic relational models [7]. However, these formulations are often intractable
and one has to resort to heuristic solutions. Recently, graphical Lasso methods [25, 21, 6, 19] for
static sparse graph estimation and extensions to time evolving graphical models [1, 8, 22] have been
proposed with lots of success. Our work here is similar in the sense that we "regress" the infection
times of a target node on the infection times of other nodes. Additionally, our work is also related to the
link prediction problem [12, 23, 18, 24], but different in the sense that this line of work assumes that
part of the network is already visible to us.
The work most closely related to ours, however, is [10], which also infers networks through cascade
data. The algorithm proposed (called NetInf) assumes that the weights of the edges in latent network
are homogeneous, i.e. all connected nodes in the network infect/influence their neighbors with the
same probability. When this assumption holds, the algorithm is very accurate and is computationally
feasible, but here we remove this assumption in order to address a more general problem. Furthermore, where [10] is an approximation algorithm, our approach guarantees optimality while easily
handling networks with thousands of nodes.
2 Problem Formulation and the Proposed Method
We now define the problem of inferring a latent social network based on network diffusion data,
where we only observe identities of infected nodes. Thus, for each node we know the interval
during which the node was infected, whereas the source of each node?s infection is unknown. We
assume only that an infected node was previously infected by some other previously infected node
to which it is connected in the latent social network (which we are trying to infer). Our methodology can handle a wide class of information diffusion and epidemic models, like the independent
contagion model, the Susceptible–Infected (SI), Susceptible–Infected–Susceptible (SIS), or even the
Susceptible–Infected–Recovered (SIR) model [2]. We show that calculating the maximum likelihood estimator (MLE) of the latent network (under any of the above diffusion models) is equivalent
to a convex problem that can be efficiently solved.
Problem formulation: The cascade model. We start by first introducing the model of the diffusion
process. As the contagion spreads through the network, it leaves a trace that we call a cascade.
Assume a population of N nodes, and let A be the N × N weighted adjacency matrix of the network
that is unobserved and that we aim to infer. Each entry (i, j) of A models the conditional probability
of infection transmission:
Aij = P (node i infects node j | node i is infected).
The temporal properties of most types of cascades, especially disease spread, are governed by a
transmission (or incubation) period. The transmission time model w(t) specifies how long it takes
for the infection to transmit from one node to another, and the recovery model r(t) models how long
a node remains infected before it recovers. Thus, whenever some node i, which was infected
at time τ_i, infects another node j, the time separating the two infection times is sampled from w(t), i.e.,
the infection time of node j is τ_j = τ_i + t, where t is distributed according to w(t). Similarly, the duration of each
node's infection is sampled from r(t). Both w(t) and r(t) are general probability distributions with
strictly nonnegative support.
A cascade c is initiated by randomly selecting a node to become infected at time t = 0. Let τ_i
denote the time of infection of node i. When node i becomes infected, it infects each of its neighbors
independently in the network, with probabilities governed by A. Specifically, if i becomes infected
and j is susceptible, then j will become infected with probability Aij . Once it has been determined
which of i's neighbors will be infected, the infection time of each newly infected neighbor will
be the sum of τ_i and an interval of time sampled from w(t). The transmission time for each new
infection is sampled independently from w(t).
Once a node becomes infected, depending on the model, different scenarios happen. In the SIS
model, node i will become susceptible to infection again at time τ_i + r_i. On the other hand, under
the SIR model, node i will recover and can never be infected again. Our work here mainly considers
the SI model, where nodes remain infected forever, i.e., a node never recovers (r_i = ∞). It is important
to note, however, that our approach can handle all of these models with almost no modification to
the algorithm.
For each cascade c, we then observe the node infection times τ_i^c as well as the durations of infection,
but the source of each node's infection remains hidden. The goal then is to, based on the observed set
of cascade infection times D, infer the weighted adjacency matrix A, where Aij models the edge
transmission probability.
Maximum Likelihood Formulation. Let D be the set of observed cascades. For each cascade
c, let τ_i^c be the time of infection of node i. Note that if node i did not get infected in cascade c,
then τ_i^c = ∞. Also, let X_c(t) denote the set of all nodes that are in an infected state at time t in
cascade c. We know the infection of each node was the result of an unknown, previously infected
node to which it is connected, so the component of the likelihood function for each infection will be
dependent on all previously infected nodes. Specifically, the likelihood function for a fixed given A
is

$$
L(A; D) \;=\; \prod_{c \in D} \Bigg[\; \prod_{i:\,\tau_i^c < \infty} P\big(i \text{ infected at } \tau_i^c \,\big|\, X_c(\tau_i^c)\big) \Bigg] \Bigg[\; \prod_{i:\,\tau_i^c = \infty} P\big(i \text{ never infected} \,\big|\, X_c(t)\;\forall t\big) \Bigg]
$$

$$
\;=\; \prod_{c \in D} \Bigg[\; \prod_{i:\,\tau_i^c < \infty} \bigg( 1 - \prod_{j:\,\tau_j^c \le \tau_i^c} \big( 1 - w(\tau_i^c - \tau_j^c)\, A_{ji} \big) \bigg) \Bigg] \Bigg[\; \prod_{i:\,\tau_i^c = \infty} \; \prod_{j:\,\tau_j^c < \infty} (1 - A_{ji}) \Bigg].
$$
The likelihood function is composed of two terms. Consider some cascade c. First, for every node
i that got infected at time τ_i^c, we compute the probability that at least one other previously infected
node could have infected it. For every non-infected node, we compute the probability that no other
node ever infected it. Note that we assume that both the cascades and the infections are conditionally
independent. Moreover, in the case of the SIS model each node can be infected multiple times
during a single cascade, so there will be multiple observed values for each τ_i^c, and the likelihood
function would have to include each infection time in the product sum. We omit this detail for the
sake of clarity.
The maximum likelihood estimate of A is then a solution to min_A −log L(A; D) subject to the
constraints 0 ≤ Aij ≤ 1 for each i, j.
Since a node cannot infect itself, the diagonal of A is strictly zero, leaving the optimization problem
with N(N − 1) variables. This makes scaling to large networks problematic. We can, however,
break this problem into N independent subproblems, each with only N − 1 variables, by observing
that the incoming edges to a node can be inferred independently of the incoming edges of any other
node. Note that there is no restriction on the structure of A (for example, it is not in general a
stochastic matrix), so the columns of A can be inferred independently.
Let node i be the current node of interest for which we would like to infer its incoming connections.
Then the MLE of the ith column of A (designated A_{:,i}) that models the strength of i's incoming
edges is the solution to min_{A_{:,i}} −log L_i(A_{:,i}; D), subject to the constraints 0 ≤ Aji ≤ 1 for each
j, and where
$$
L_i(A_{:,i}; D) \;=\; \prod_{c \in D:\,\tau_i^c < \infty} \bigg( 1 - \prod_{j:\,\tau_j^c \le \tau_i^c} \big( 1 - w(\tau_i^c - \tau_j^c)\, A_{ji} \big) \bigg) \; \prod_{c \in D:\,\tau_i^c = \infty} \; \prod_{j \in c:\,\tau_j^c < \infty} (1 - A_{ji}).
$$
Lastly, the number of variables can further be reduced by observing that if node j is never infected
in the same cascade as node i, then the MLE of Aji = 0, and Aji can thus be excluded from the set
of variables. This dramatically reduces the number of variables as in practice the true A does not
induce large cascades, causing the cascades to be sparse in the number of nodes they infect [14, 17].
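For concreteness, a direct (and deliberately unoptimized) evaluation of −log L_i could look like the sketch below; representing each cascade as a dict of infection times is an assumption made for illustration:

```python
import numpy as np

def neg_log_lik_i(A_col, i, cascades, w):
    """-log L_i(A_{:,i}; D) for the incoming edges of node i.

    A_col    : length-N array with A_col[j] = A_ji, the probability
               that j transmits the contagion to i
    cascades : list of dicts mapping node -> infection time tau
               (nodes absent from a dict were never infected)
    w        : transmission time density w(t)
    """
    nll = 0.0
    for tau in cascades:
        if i in tau:
            # Probability that no earlier-infected j transmitted to i.
            p_none = 1.0
            for j, tau_j in tau.items():
                if j != i and tau_j <= tau[i]:
                    p_none *= 1.0 - w(tau[i] - tau_j) * A_col[j]
            nll -= np.log(1.0 - p_none)
        else:
            # i was never infected: no infected j ever transmitted to i.
            for j in tau:
                nll -= np.log(1.0 - A_col[j])
    return nll
```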
Towards the convex problem. The Hessians of the likelihood and log-likelihood functions are indefinite
in general, and this could make finding the globally optimal MLE for A difficult. Here, we derive a
convex optimization problem that is equivalent to the above MLE problem. This not only guarantees
convergence to a globally optimal solution, but it also allows for the use of highly optimized convex
programming methods.
We begin with the problem max_{A_{:,i}} L_i(A; D) subject to 0 ≤ Aji ≤ 1 for each j. If we then make
the change of variables B_ji = 1 − A_ji and

$$
\gamma_c \;=\; 1 - \prod_{j \in X_c(\tau_i^c)} \big( 1 - w(\tau_i^c - \tau_j^c)\, A_{ji} \big),
$$

the problem then becomes

$$
\max_{\gamma,\, B_{:,i}} \;\; \prod_{c \in D:\,\tau_i^c < \infty} \gamma_c \; \prod_{c \in D:\,\tau_i^c = \infty} \; \prod_{j \in c:\,\tau_j^c < \infty} B_{ji}
$$

subject to

$$
0 \le B_{ji} \le 1 \;\; \forall j, \qquad 0 \le \gamma_c \le 1 \;\; \forall c, \qquad \gamma_c + \prod_{j \in X_c(\tau_i^c)} \big( 1 - w_j^c + w_j^c B_{ji} \big) \le 1 \;\; \forall c,
$$
where we use the shorthand notation $w_j^c \equiv w(\tau_i^c - \tau_j^c)$ (note that i is fixed). Also, note that the last
constraint on γ_c is an inequality instead of an equality constraint. The objective function strictly
increases when either γ_c or B_ji increases, so this inequality will always be binding
at the solution, i.e., the equality will always hold. The reason we use the inequality is that it
turns the constraint into an upper bound on a posynomial (assuming w(t) ≤ 1 for all t). Furthermore,
with this change of variables the objective function is a monomial, and our problem satisfies all the
requirements for a geometric program. To convexify the geometric program, we apply
the change of variables γ̂_c = log(γ_c) and B̂_ji = log(B_ji), and take the reciprocal of the objective
function to turn it into a minimization problem. Finally, we take the logarithm of the objective
function as well as of the constraints, and we are left with the following convex optimization problem
$$
\min_{\hat{\gamma},\, \hat{B}_{:,i}} \;\; -\sum_{c \in D:\,\tau_i^c < \infty} \hat{\gamma}_c \;-\; \sum_{c \in D:\,\tau_i^c = \infty} \; \sum_{j \in c:\,\tau_j^c < \infty} \hat{B}_{ji}
$$

subject to

$$
\hat{B}_{ji} \le 0 \;\; \forall j, \qquad \hat{\gamma}_c \le 0 \;\; \forall c, \qquad \log\!\Bigg( \exp(\hat{\gamma}_c) + \prod_{j:\,\tau_j^c \le \tau_i^c} \big( 1 - w_j^c + w_j^c \exp(\hat{B}_{ji}) \big) \Bigg) \le 0 \;\; \forall c.
$$
Network sparsity. In general, social networks are sparse in the sense that on average nodes are
connected to a constant number rather than a constant fraction of the other nodes in the network. To encourage a sparse MLE solution, an l1 penalty term can be added to the original (pre-convexification)
log-likelihood function, making the objective function

$$
-\log L_i(A_{:,i} \mid D) \;+\; \lambda \sum_{j=1}^{N} |A_{ji}|,
$$
where λ is the sparsity parameter. Experimentation has indicated that including this penalty function
dramatically increases the performance of the method; however, if we apply the same convexification
process to this new augmented objective function, the resulting function is

$$
-\sum_{c \in D:\,\tau_i^c < \infty} \hat{\gamma}_c \;-\; \sum_{c \in D:\,\tau_i^c = \infty} \; \sum_{j \in c:\,\tau_j^c < \infty} \hat{B}_{ji} \;-\; \lambda \sum_{j=1}^{N} \exp\big(\hat{B}_{ji}\big),
$$
which is concave and makes the whole problem non-convex. Instead, we propose the use of the
penalty function $\lambda \sum_{j=1}^{N} \frac{1}{1 - A_{ji}}$. This penalty function still promotes a sparse solution, and even
though we no longer have a geometric program, we can convexify the objective function and so the
global convexity is preserved:

$$
-\sum_{c \in D:\,\tau_i^c < \infty} \hat{\gamma}_c \;-\; \sum_{c \in D:\,\tau_i^c = \infty} \; \sum_{j \in c:\,\tau_j^c < \infty} \hat{B}_{ji} \;+\; \lambda \sum_{j=1}^{N} \exp\big(-\hat{B}_{ji}\big).
$$
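Once the index sets are precomputed, the convexified penalized objective is cheap to evaluate; a hedged numpy sketch (the variable layout is an illustrative assumption):

```python
import numpy as np

def convex_objective(gamma_hat, B_hat, noninfected_idx, lam):
    """Convexified penalized objective for a single node i.

    gamma_hat       : array of log(gamma_c), one entry per cascade in
                      which i was infected
    B_hat           : array of log(B_ji) over candidate incoming edges j
    noninfected_idx : indices j (with multiplicity over cascades) whose
                      B_hat[j] terms appear for cascades in which i was
                      never infected
    lam             : the sparsity parameter lambda
    """
    obj = -gamma_hat.sum() - B_hat[noninfected_idx].sum()
    obj += lam * np.exp(-B_hat).sum()   # lambda * sum_j 1 / (1 - A_ji)
    return obj
```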
Implementation. We use the SNOPT7 library to solve the likelihood optimization. We break the
network inference down into a series of subproblems corresponding to the inference of the inbound
edges of each node. Special care is needed for the sparsity penalty function. The presence of
the l1-like penalty function makes the method extremely effective at predicting the presence of edges
in the network, but it has the effect of distorting the estimated edge transmission probabilities. To
correct for this, the inference problem is first solved with the penalty. In the resulting solution,
the edge transmission probabilities that have been set to zero are then restricted to remain at zero, and
the problem is re-solved with the sparsity parameter set to λ = 0. This preserves the precision
and recall of the algorithm's edge location predictions while still producing accurate edge transmission probability estimates. Moreover, with the implementation described above, most 1000-node
networks can be inferred inside of 10 minutes, running on a laptop. A freely-distributable (but
non-scalable) MATLAB implementation can be found at http://snap.stanford.edu/connie.
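In outline, the two-stage per-node procedure could be organized as follows; this is a sketch, with solve_node standing in for the SNOPT-based subproblem solver (an assumed placeholder, not part of the released implementation):

```python
import numpy as np

def infer_network(N, cascades, w, lam, solve_node):
    """Infer A column by column with the two-stage scheme described above.

    solve_node(i, candidates, cascades, w, lam) is assumed to wrap the
    convex subproblem solver and return fitted A_ji values for the
    candidate parents of node i.
    """
    A = np.zeros((N, N))
    for i in range(N):
        # Candidate parents: nodes co-infected with i in some cascade.
        cand = sorted({j for tau in cascades if i in tau
                         for j in tau if j != i})
        if not cand:
            continue
        # Stage 1: solve with the sparsity penalty to select the support.
        a_pen = solve_node(i, cand, cascades, w, lam)
        support = [j for j, a in zip(cand, a_pen) if a > 1e-8]
        # Stage 2: re-solve on the support with lam = 0 to undo the
        # penalty's distortion of the edge transmission probabilities.
        if support:
            for j, a in zip(support, solve_node(i, support, cascades, w, 0.0)):
                A[j, i] = a
    return A
```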
3 Experiments
In this section, we evaluate our network inference method, which we will refer to as ConNIe
(Convex Network Inference) on a range of datasets and network topologies. This includes both
synthetically generated networks as well as real social networks, and both simulated and real diffusion data. In our experiments we focus on the SI model as it best applies to the real data we
use.
3.1 Synthetic data
Each of the synthetic data experiments begins with the construction of the network. We ran our
algorithm on a directed scale-free network constructed using the preferential attachment model [3],
and also on an Erdős–Rényi random graph. Both networks have 512 nodes and 1024 edges. In each
case, the networks were constructed as unweighted graphs, and then each edge (i, j) was assigned a
uniformly random transmission probability Aij between 0.05 and 1.
Transmission time model. In all of our experiments, we assume that the model w(t) of transmission times is known. We experimented with various realistic models for the transmission
time [2]: exponential ($w(t) = \alpha e^{-\alpha t}$), power law ($w(t) \propto (\alpha - 1)\, t^{-\alpha}$), and the Weibull distribution ($w(t) = \frac{k}{\lambda}\big(\frac{t}{\lambda}\big)^{k-1} e^{-(t/\lambda)^k}$), as it has been argued that a Weibull distribution with λ = 9.5 and
k = 2.3 best describes the propagation model of the SARS outbreak in Hong Kong [26]. Notice that
our model does not make any assumption about the structure of w(t). For example, our approach
can handle the exponential and power-law distributions, which both have a mode at 0 and monotonically decrease in
t, as well as the Weibull distribution, which can have a mode at any value.
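For reference, each of these transmission time models can be sampled directly; a small sketch (parameter names follow the formulas above):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_exponential(alpha, size=1):
    # w(t) = alpha * exp(-alpha * t)
    return rng.exponential(scale=1.0 / alpha, size=size)

def sample_power_law(alpha, t_min=1.0, size=1):
    # Inverse-CDF sampling for w(t) proportional to t^(-alpha), t >= t_min.
    u = rng.uniform(size=size)
    return t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def sample_weibull(lam=9.5, k=2.3, size=1):
    # numpy's weibull sampler has unit scale; multiply by the scale lam.
    return lam * rng.weibull(k, size=size)
```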
We generate cascades by first selecting a random starting node of the infection. From there, the
infection is propagated to other nodes until no new infections occur: an infected node i transmits
the infection to uninfected j with probability Aij , and if transmission occurs then the propagation
time t is sampled according to the distribution w(t).
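A minimal event-driven simulator for this generation process under the SI model might look as follows (a sketch; sample_w stands for any transmission time sampler, such as those above):

```python
import heapq
import numpy as np

def simulate_cascade(A, sample_w, rng):
    """Simulate one SI cascade on the weighted adjacency matrix A.
    Returns a dict mapping each infected node to its infection time."""
    N = A.shape[0]
    source = int(rng.integers(N))
    best = {source: 0.0}                 # earliest scheduled infection times
    heap = [(0.0, source)]
    infected = {}
    while heap:
        t_i, i = heapq.heappop(heap)
        if i in infected:                # stale event: i was infected earlier
            continue
        infected[i] = t_i
        for j in range(N):
            if j in infected or rng.uniform() >= A[i, j]:
                continue
            t_j = t_i + float(sample_w())
            if t_j < best.get(j, np.inf):    # keep the earliest attempt
                best[j] = t_j
                heapq.heappush(heap, (t_j, j))
    return infected
```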
[Figure 1: six panels. (a)–(c) precision–recall curves for ConNIe vs. NetInf under power-law (PL), exponential (Exp), and Weibull (WB) transmission time models; (d)–(f) mean square error of the estimated edge transmission probabilities as a function of the number of edges, for the same three models.]
Figure 1: (a)-(c): Precision and recall of ConNIe compared to NetInf for the SI diffusion model, run
on a synthetic scale-free graph with synthetically generated cascades. Transmission time models
used are power law (PL), exponential (Exp), and Weibull (WB). All networks contain 512 nodes,
and the weight of each edge was sampled from a uniform random distribution between 0 and 1. For
the MLE method, the PR curves were generated by varying the sparsity parameter λ between 0 and
1000. (d)-(f): Mean square error of the edge transmission probability of the two algorithms. The
dotted green line indicates the number of edges in the true network.
The cascade is then given to the algorithm in the form of a series of timestamps corresponding to when each node was infected. Not to make the
problem too easy, we generate enough cascades so that 99% of all edges of the network transmitted
at least one infection. The number of cascades needed for this depends on the underlying network.
Overall, we generate on the same order of cascades as there are nodes in the network.
Quantifying performance. To assess the performance of ConNIe, we consider both the accuracy of
the edge prediction and the accuracy of the edge transmission probabilities. For edge prediction, we
recorded the precision and recall of the algorithm. We simply vary the value of λ to obtain networks
with different numbers of edges, and then for each such inferred network we compute precision (the
number of correctly inferred edges divided by the total number of inferred edges) and recall (the
number of correctly inferred edges divided by the total number of edges in the unobserved network).
For large values of λ the inferred networks have high precision but low recall, while for low values of λ
the precision will be poor but the recall will be high.
To assess the accuracy of the estimated edge transmission probabilities Aij, we compute the mean-square error (MSE). The MSE is taken over the union of the potential edge positions (node pairs) where
there is an edge in the latent network and the edge positions in which the algorithm has predicted
the presence of an edge. For potential edge locations with no edge present, the weight is set to 0.
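Both metrics are straightforward to compute from the true and inferred adjacency matrices; a sketch (the edge-detection tolerance is an assumption):

```python
import numpy as np

def evaluate(A_true, A_hat, tol=1e-8):
    """Edge-prediction precision/recall and edge-weight MSE."""
    E_true = A_true > tol                  # true edge positions
    E_hat = A_hat > tol                    # predicted edge positions
    tp = np.logical_and(E_true, E_hat).sum()
    precision = tp / max(E_hat.sum(), 1)
    recall = tp / max(E_true.sum(), 1)
    # MSE over the union of true and predicted edge positions, with
    # absent edges counted as weight 0.
    union = np.logical_or(E_true, E_hat)
    mse = float(np.mean((A_true[union] - A_hat[union]) ** 2))
    return precision, recall, mse
```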
Comparison to other methods. We compare our approach to NetInf, which is an iterative algorithm
based on submodular function optimization [10]. NetInf first reconstructs the most likely structure of
each cascade, and then based on this reconstruction, it selects the next most likely edge of the social
network. The algorithm assumes that the weights of all edges have the same constant value (i.e.,
all nonzero Aij have the same value). To apply this algorithm to the problem we are considering,
we simply first use the NetInf to infer the network structure and then estimate the edge transmission
probabilities Aij by simply counting the fraction of times it was predicted that a cascade propagated
along the edge (i, j).
Figure 1 shows the precision-recall curves for the scale-free synthetic network with the three transmission models w(t). The results for the Erdős–Rényi random graph were omitted due to space
restrictions, but they were very similar. Notice our approach achieves the break even point (point
where precision equals recall) well above 0.85. This is a notable result: we were especially careful
not to generate too many cascades, since more cascades mean more evidence that makes the problem
easier. Also in Figure 1 we plot the Mean Squared Error of the estimates of the edge transmission
probability Aij as a function of the number of edges in the inferred network. The green vertical
line indicates the point where the inferred network contains the same number of edges as the real
network. Notice that ConNIe estimates the edge weights with error less than 0.05, which is more
than a factor of two smaller than the error of the NetInf algorithm. This, of course, is expected as
NetInf assumes the network edge weights are homogeneous, which is not the case.
[Figure 2: six panels. (a)–(b) PR break-even point vs. number of observed diffusions (PL, EXP); (c)–(d) MSE at the break-even point vs. number of diffusions; (e) PR break-even point vs. noise-to-signal ratio; (f) runtime vs. network size.]
Figure 2: (a)-(b): Precision-Recall break-even point for the two methods as a function of the number of observed cascades, with a power law (PL) and exponential (EXP) transmission distribution.
(c)-(d): Mean Square Error at the PR-Break-even point as a function of the number of observed
cascades. (e) PR Break-even point versus the perturbation size applied to the infection times.
We also tested the robustness of our algorithm. Figure 2 shows the accuracy (Precision-Recall
break-even point as well as edge MSE) as a function of the number of observed diffusions, as well
as the effect of noise in the infection times. Noise was added to the cascades by adding independent
normally distributed perturbations to each of the observed infection times, and the noise-to-signal
ratio was calculated as the average perturbation over the average infection transmission time.
The plot shows that ConNIe is robust against such perturbations, as it can still accurately infer the
network with noise-to-signal ratios as high as 0.4.
3.2 Experiments on Real data
Real social networks. We also experiment with three real-world networks. First, we consider a
small collaboration network between 379 scientists doing research on networks. Second, we experiment on a real email social network of 593 nodes and 2824 edges that is based on the email
communication in a small European research institute.
For the edges in the collaboration network we simply randomly assigned their edge transmission
probabilities. For the email network, the number of emails sent from a person i to a person j
indicates the connection strength. Let there be a rumor cascading through a network, and assume
the probability that any one email contains this rumor is fixed at α. Then if person i sent person j
m_ij emails, the probability of i infecting j with the rumor is A_ij = 1 − (1 − β)(1 − α)^{m_ij}. The
parameter β simply enforces a minimum edge weight between the pairs who have exchanged at least
one email. We set α = .001 and β = .05.
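As a quick worked check of this mapping (a sketch using the parameter values stated above):

```python
def edge_prob(m_ij, alpha=0.001, beta=0.05):
    """A_ij = 1 - (1 - beta) * (1 - alpha)**m_ij."""
    return 1.0 - (1.0 - beta) * (1.0 - alpha) ** m_ij

# edge_prob(0)   -> 0.050  (beta, the minimum weight for connected pairs)
# edge_prob(100) -> ~0.140 (the weight grows with the email count m_ij)
```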
For the email network we generated cascades using the power-law transmission time model, while
for the collaboration network we used the Weibull distribution for sampling transmission times. We
then ran the network inference on these cascades, and Figure 3 gives the results. As with the synthetic networks, our approach achieves break-even points of around 0.95 on both datasets. Moreover,
the edge transmission probability estimation error is less than 0.03. This is ideal: our method is capable of near-perfect recovery of the underlying social network over which a relatively small number
of contagions diffused.
[Figure 3: precision–recall curves and edge-weight MSE for the email network (top row) and the collaboration network (bottom row), plus the precision–recall curve for the recommendation network (right).]
Figure 3: The precision-recall curve of the network estimation (left) and the mean-square error of
the predicted transmission probabilities as a function of the number of edges being predicted (middle). The
top row shows the results for the email network, and the bottom row for the collaboration network.
(Right) Precision-recall curve for inferring a real recommendation network based on real product
recommendation data.
Real social networks and real cascades. Last, we investigate a large person-to-person recommendation network, consisting of four million people who made sixteen million recommendations on
half a million products [14]. People generate cascades as follows: a node (person) v buys product p
at time t, and then recommends it to nodes {w1 , . . . , wn }. These nodes wi can then buy the product
(with the option to recommend it to others). We trace cascades of purchases on a small subset of the
data. We consider a recommendation network of 275 users and 1522 edges and a set of 5,767 recommendations on 625 different products between a set of these users. Since the edge transmission
model is unknown, we model it with a power-law distribution with parameter α = 2.
We present the results in the rightmost plot of Figure 3. Our approach is able to recover the underlying
social network surprisingly accurately. The break-even point of our approach is 0.74 while NetInf
scores 0.55. Moreover, we also note that our approach took less than 20 seconds to infer this network. Since there are no ground truth edge transmission probabilities for us to compare against, we
cannot compute the error of edge weight estimation.
4 Conclusion
We have presented a general solution to the problem of inferring latent social networks from the
network diffusion data. We formulated a maximum likelihood problem and by solving an equivalent
convex problem, we can guarantee the optimality of the solution. Furthermore, the l1 regularization
can be used to enforce a sparse solution while still preserving convexity. We evaluated our algorithm on a wide set of synthetic and real-world networks with several different cascade propagation
models. We found our method to be more general and robust than the competing approaches. Experiments reveal that our method near-perfectly recovers the underlying network structure as well as
the parameters of the edge transmission model. Moreover, our approach scales well as it can infer
optimal networks on thousand nodes in a matter of minutes.
One possible avenue for future work is to also learn the parameters of the underlying model
of diffusion times w(t). It would be fruitful to apply our approach to other datasets, like the spread
of a news story breaking across the blogosphere, a SARS outbreak, or a new marketing campaign
on a social networking website, and to extend it to additional models of diffusion. By inferring and
modeling the structure of such latent social networks, we can gain insight into positions and roles
various nodes play in the diffusion process and assess the range of influence of nodes in the network.
Acknowledgements. This research was supported in part by NSF grants CNS-1010921 and IIS-1016909, LLNL grant B590105, the Albert Yu and Mary Bechmann Foundation, IBM, Lightspeed,
Microsoft and Yahoo.
References
[1] A. Ahmed and E. Xing. Recovering time-varying networks of dependencies in social and biological studies. PNAS, 106(29):11878, 2009.
[2] N. T. J. Bailey. The Mathematical Theory of Infectious Diseases and its Applications. Hafner Press, 2nd edition, 1975.
[3] A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science, 1999.
[4] M. Choudhury, W. A. Mason, J. M. Hofman, and D. J. Watts. Inferring relevant social networks from interpersonal communication. In WWW '10, pages 301–310, 2010.
[5] N. Eagle, A. S. Pentland, and D. Lazer. Inferring friendship network structure by using mobile phone data. PNAS, 106(36):15274–15278, 2009.
[6] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
[7] L. Getoor, N. Friedman, D. Koller, and B. Taskar. Learning probabilistic models of link structure. JMLR, 3:707, 2003.
[8] Z. Ghahramani. Learning dynamic Bayesian networks. Adaptive Processing of Sequences and Data Structures, page 168, 1998.
[9] L. Giot, J. Bader, C. Brouwer, A. Chaudhuri, B. Kuang, Y. Li, Y. Hao, C. Ooi, B. Godwin, et al. A protein interaction map of Drosophila melanogaster. Science, 302(5651):1727, 2003.
[10] M. Gomez-Rodriguez, J. Leskovec, and A. Krause. Inferring networks of diffusion and influence. In KDD '10, 2010.
[11] S. Hill, F. Provost, and C. Volinsky. Network-based marketing: Identifying likely adopters via consumer networks. Statistical Science, 21(2):256–276, 2006.
[12] R. Jansen, H. Yu, D. Greenbaum, et al. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science, 302(5644):449–453, October 2003.
[13] G. Kossinets and D. J. Watts. Empirical analysis of an evolving social network. Science, 2006.
[14] J. Leskovec, L. A. Adamic, and B. A. Huberman. The dynamics of viral marketing. ACM TWEB, 1(1):2, 2007.
[15] J. Leskovec, L. Backstrom, and J. Kleinberg. Meme-tracking and the dynamics of the news cycle. In KDD '09, pages 497–506, 2009.
[16] J. Leskovec and E. Horvitz. Planetary-scale views on a large instant-messaging network. In WWW '08, 2008.
[17] J. Leskovec, A. Singh, and J. M. Kleinberg. Patterns of influence in a recommendation network. In PAKDD '06, pages 380–389, 2006.
[18] D. Liben-Nowell and J. Kleinberg. The link prediction problem for social networks. In CIKM '03, pages 556–559, 2003.
[19] N. Meinshausen and P. Buehlmann. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, pages 1436–1462, 2006.
[20] M. Middendorf, E. Ziv, and C. Wiggins. Inferring network mechanisms: the Drosophila melanogaster protein interaction network. PNAS, 102(9):3192, 2005.
[21] M. Schmidt, A. Niculescu-Mizil, and K. Murphy. Learning graphical model structure using l1-regularization paths. In AAAI, volume 22, page 1278, 2007.
[22] L. Song, M. Kolar, and E. Xing. Time-varying dynamic Bayesian networks. In NIPS '09.
[23] B. Taskar, M. F. Wong, P. Abbeel, and D. Koller. Link prediction in relational data. In NIPS '03.
[24] J. Vert and Y. Yamanishi. Supervised graph inference. In NIPS '05.
[25] M. J. Wainwright, P. Ravikumar, and J. D. Lafferty. High-dimensional graphical model selection using ℓ1-regularized logistic regression. In PNAS, 2006.
[26] J. Wallinga and P. Teunis. Different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures. American Journal of Epidemiology, 160(6):509–516, 2004.
[27] S. Wasserman and K. Faust. Social Network Analysis: Methods and Applications. Cambridge University Press, 1994.
3,439 | 4,114 | Slice sampling covariance hyperparameters of latent Gaussian models
Ryan Prescott Adams
Dept. Computer Science
University of Toronto
Iain Murray
School of Informatics
University of Edinburgh
Abstract
The Gaussian process (GP) is a popular way to specify dependencies between random variables in a probabilistic model. In the Bayesian framework
the covariance structure can be specified using unknown hyperparameters.
Integrating over these hyperparameters considers different possible explanations for the data when making predictions. This integration is often performed using Markov chain Monte Carlo (MCMC) sampling. However, with
non-Gaussian observations standard hyperparameter sampling approaches
require careful tuning and may converge slowly. In this paper we present
a slice sampling approach that requires little tuning while mixing well in
both strong- and weak-data regimes.
1 Introduction
Many probabilistic models incorporate multivariate Gaussian distributions to explain dependencies between variables. Gaussian process (GP) models and generalized linear mixed
models are common examples. For non-Gaussian observation models, inferring the parameters that specify the covariance structure can be difficult. Existing computational methods
can be split into two complementary classes: deterministic approximations and Monte Carlo
simulation. This work presents a method to make the sampling approach easier to apply.
In recent work Murray et al. [1] developed a slice sampling [2] variant, elliptical slice sampling, for updating strongly coupled a-priori Gaussian variates given non-Gaussian observations. Previously, Agarwal and Gelfand [3] demonstrated the utility of slice sampling for
updating covariance parameters, conventionally called hyperparameters, with a Gaussian
observation model, and questioned the possibility of slice sampling in more general settings.
In this work we develop a new slice sampler for updating covariance hyperparameters. Our
method uses a robust representation that should work well on a wide variety of problems,
has very few technical requirements, little need for tuning and so should be easy to apply.
1.1 Latent Gaussian models
We consider generative models of data that depend on a vector of latent variables f that are
Gaussian distributed with covariance Σ_θ set by unknown hyperparameters θ. These models
are common in the machine learning Gaussian process literature [e.g. 4] and throughout the
statistical sciences. We use standard notation for a Gaussian distribution with mean m and
covariance Σ,
N(f; m, Σ) = |2πΣ|^{-1/2} exp(-(1/2)(f − m)^⊤ Σ^{-1}(f − m)),   (1)
and use f ∼ N(m, Σ) to indicate that f is drawn from a distribution with the density in (1).
[Figure 1 residue removed: panel (a) plots latent values f against input space x, and panel (b) plots the posterior p(log l | f) against lengthscale l, each for l = 0.1, 0.5, 2.]
(a) Prior draws
(b) Lengthscale given f
Figure 1: (a) Shows draws from the prior over f using three different lengthscales in the squared
exponential covariance (2). (b) Shows the posteriors over log-lengthscale for these three draws.
The generic form of the generative models we consider is summarized by
covariance hyperparameters θ ∼ p_h,
latent variables f ∼ N(0, Σ_θ),
and a conditional likelihood P(data | f) = L(f).
The methods discussed in this paper apply to covariances Σ_θ that are arbitrary positive
definite functions parameterized by θ. However, our experiments focus on the popular case
where the covariance is associated with N input vectors {x_n}_{n=1}^{N} through the
squared-exponential kernel,
(Σ_θ)_{ij} = k(x_i, x_j) = σ_f^2 exp(-(1/2) ∑_{d=1}^{D} (x_{d,i} − x_{d,j})^2 / ℓ_d^2),   (2)
with hyperparameters θ = {σ_f^2, {ℓ_d}}. Here σ_f^2 is the "signal variance" controlling the overall
scale of the latent variables f. The ℓ_d give characteristic lengthscales for converting the
distances between inputs into covariances between the corresponding latent values f.
For non-Gaussian likelihoods we wish to sample from the joint posterior over unknowns,
P(f, θ | data) = (1/Z) L(f) N(f; 0, Σ_θ) p_h(θ).   (3)
We would like to avoid implementing new code or tuning algorithms for different covariances
Σ_θ and conditional likelihood functions L(f).
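As a concrete reference point, the following sketch (ours, not from the paper; all names are invented) builds the squared-exponential covariance of equation (2) with NumPy and draws latent values f from the N(0, Σ_θ) prior, as used to produce draws like those in Figure 1a. A small jitter term is an assumption added for numerical stability.

import numpy as np

def se_covariance(X, signal_var, lengthscales, jitter=1e-8):
    """Sigma_theta for inputs X (N x D), per equation (2)."""
    Z = X / lengthscales                          # scale dimension d by ell_d
    sq_dists = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * sq_dists) + jitter * np.eye(len(X))

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 1))                    # inputs on [0, 1]
for ell in (0.1, 0.5, 2.0):                       # the lengthscales of Figure 1a
    Sigma = se_covariance(X, signal_var=1.0, lengthscales=np.array([ell]))
    f = np.linalg.cholesky(Sigma) @ rng.standard_normal(len(X))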
2 Markov chain inference
A Markov chain transition operator T(z′ ← z) defines a conditional distribution on a new
position z′ given an initial position z. The operator is said to leave a target distribution π
invariant if π(z′) = ∫ T(z′ ← z) π(z) dz. A standard way to sample from the joint posterior (3) is to alternately simulate transition operators that leave its conditionals, P(f | data, θ)
and P(θ | f), invariant. Under fairly mild conditions the Markov chain will equilibrate towards the target distribution [e.g. 5].
Recent work has focused on transition operators for updating the latent variables f given
data and a fixed covariance Σ_θ [6, 1]. Updates to the hyperparameters for fixed latent
variables f need to leave the conditional posterior,
P(θ | f) ∝ N(f; 0, Σ_θ) p_h(θ),   (4)
invariant. The simplest algorithm for this is the Metropolis-Hastings operator, see Algorithm 1. Other possibilities include slice sampling [2] and Hamiltonian Monte Carlo [7, 8].
Alternately fixing the unknowns f and θ is appealing from an implementation standpoint.
However, the resulting Markov chain can be very slow in exploring the joint posterior distribution. Figure 1a shows latent vector samples using squared-exponential covariances with
different lengthscales. These samples are highly informative about the lengthscale hyperparameter that was used, especially for short lengthscales. The sharpness of P(θ | f), Figure 1b,
dramatically limits the amount that any Markov chain can update the hyperparameters θ
for fixed latent values f.
Algorithm 1 M-H transition for fixed f
  Input: Current f and hyperparameters θ; proposal dist. q; covariance function Σ(·).
  Output: Next hyperparameters
  1: Propose: θ′ ∼ q(θ′; θ)
  2: Draw u ∼ Uniform(0, 1)
  3: if u < [N(f; 0, Σ_θ′) p_h(θ′) q(θ; θ′)] / [N(f; 0, Σ_θ) p_h(θ) q(θ′; θ)]
  4:   return θ′          ▹ Accept new state
  5: else
  6:   return θ           ▹ Keep current state

Algorithm 2 M-H transition for fixed ν
  Input: Current state θ, f; proposal dist. q; covariance function Σ(·); likelihood L(·).
  Output: Next θ, f
  1: Solve for N(0, I) variate: ν = L_{Σ_θ}^{-1} f
  2: Propose θ′ ∼ q(θ′; θ)
  3: Compute implied values: f′ = L_{Σ_θ′} ν
  4: Draw u ∼ Uniform(0, 1)
  5: if u < [L(f′) p_h(θ′) q(θ; θ′)] / [L(f) p_h(θ) q(θ′; θ)]
  6:   return θ′, f′      ▹ Accept new state
  7: else
  8:   return θ, f        ▹ Keep current state
2.1 Whitening the prior
Often the conditional likelihood is quite weak; this is why strong prior smoothing assumptions are often introduced in latent Gaussian models. In the extreme limit in
which there is no data, i.e. L is constant, the target distribution is the prior model,
P(f, θ) = N(f; 0, Σ_θ) p_h(θ). Sampling from the prior should be easy, but alternately fixing f and θ does not work well because they are strongly coupled. One strategy is to
reparameterize the model so that the unknown variables are independent under the prior.
Independent random variables can be identified from a commonly-used generative procedure
for the multivariate Gaussian distribution. A vector of independent normals, ν, is drawn
independently of the hyperparameters and then deterministically transformed:
ν ∼ N(0, I),   f = L_{Σ_θ} ν,   where L_{Σ_θ} L_{Σ_θ}^⊤ = Σ_θ.   (5)
Notation: Throughout this paper L_C will be any user-chosen square root of covariance
matrix C. While any matrix square root can be used, the lower-diagonal Cholesky decomposition is often the most convenient. We would reserve C^{1/2} for the principal square root,
because other square roots do not behave like powers: for example, chol(C)^{-1} ≠ chol(C^{-1}).
We can choose to update the hyperparameters θ for fixed ν instead of fixed f. As the
original latent variables f are deterministically linked to the hyperparameters θ in (5), these
updates will actually change both θ and f. The samples in Figure 1a resulted from using
the same whitened variable ν with different hyperparameters. They follow the same general
trend, but vary over the lengthscales used to construct them.
The posterior over hyperparameters for fixed ν is apparent by applying Bayes rule to the
generative procedure in (5), or one can laboriously obtain it by changing variables in (3):
P(θ | ν, data) ∝ P(θ, ν, data) = P(θ, f = L_{Σ_θ} ν, data) |L_{Σ_θ}| ∝ L(f(ν, θ)) p_h(θ).   (6)
Algorithm 2 is the Metropolis-Hastings operator for this distribution. The acceptance rule
now depends on the latent variables through the conditional likelihood L(f) instead of the
prior N(f; 0, Σ_θ) and these variables are automatically updated to respect the prior. In the
no-data limit, new hyperparameters proposed from the prior are always accepted.
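To make the whitened update concrete, here is a minimal sketch of Algorithm 2 for a single positive hyperparameter θ (a lengthscale, say). It assumes a multiplicative random-walk proposal on θ and a log-uniform prior p_h, under which the prior and proposal terms cancel; build_sigma and log_lik are user-supplied stand-ins for Σ_θ and log L(f), and none of these names come from the paper.

import numpy as np

def whitened_mh_step(theta, f, build_sigma, log_lik, step=0.5, rng=None):
    """One update of theta with nu = L_{Sigma_theta}^{-1} f held fixed
    (Algorithm 2); build_sigma(theta) must return Sigma_theta."""
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(build_sigma(theta))
    nu = np.linalg.solve(L, f)                           # step 1: recover nu
    theta_new = theta * np.exp(step * rng.standard_normal())
    f_new = np.linalg.cholesky(build_sigma(theta_new)) @ nu   # step 3
    # Under the assumed multiplicative proposal and a log-uniform prior
    # p_h(theta) propto 1/theta, the prior and proposal terms cancel, so the
    # acceptance ratio of Algorithm 2 reduces to the likelihood ratio.
    if np.log(rng.uniform()) < log_lik(f_new) - log_lik(f):
        return theta_new, f_new                          # accept new state
    return theta, f                                      # keep current state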
3 Surrogate data model
Neither of the previous two algorithms are ideal for statistical applications, which is illustrated in Figure 2. Algorithm 2 is ideal in the "weak data" limit where the latent variables f
are distributed according to the prior. In the example, the likelihoods are too restrictive for
Algorithm 2's proposal to be acceptable. In the "strong data" limit, where the latent variables f are fixed by the likelihood L, Algorithm 1 would be ideal. However, the likelihood
terms in the example are not so strong that the prior can be ignored.
For regression problems with Gaussian noise the latent variables can be marginalised out analytically, allowing hyperparameters to be accepted or rejected according to their marginal
posterior P(θ | data). If latent variables are required they can be sampled directly from
the conditional posterior P(f | θ, data). To build a method that applies to non-Gaussian
likelihoods, we create an auxiliary variable model that introduces surrogate Gaussian observations that will guide joint proposals of the hyperparameters and latent variables.
[Figure 2 residue removed: the plot shows observations y against input space x, with the current state f, a whitened prior proposal, and a surrogate data proposal.]
Figure 2: A regression problem with Gaussian observations illustrated by 2σ gray bars. The
current state of the sampler has a short lengthscale hyperparameter (ℓ = 0.3); a longer lengthscale
(ℓ = 1.5) is being proposed. The current latent variables do not lie on a straight enough line for the
long lengthscale to be plausible. Whitening the prior (Section 2.1) updates the latent variables to
a straighter line, but ignores the observations. A proposal using surrogate data (Section 3, with S_θ
set to the observation noise) sets the latent variables to a draw that is plausible for the proposed
lengthscale while being close to the current state.
We augment the latent Gaussian model with auxiliary variables, g, a noisy version of the
true latent variables:
P(g | f, θ) = N(g; f, S_θ).   (7)
For now S_θ is an arbitrary free parameter that could be set by hand to either a fixed
value or a value that depends on the current hyperparameters θ. We will discuss how to
automatically set the auxiliary noise covariance S_θ in Section 3.2.
The original model, f ∼ N(0, Σ_θ), and (7) define a joint auxiliary distribution P(f, g | θ)
given the hyperparameters. It is possible to sample from this distribution in the opposite
order, by first drawing the auxiliary values from their marginal distribution
P(g | θ) = N(g; 0, Σ_θ + S_θ),   (8)
and then sampling the model's latent values conditioned on the auxiliary values from
P(f | g, θ) = N(f; m_{θ,g}, R_θ), where some standard manipulations give:
R_θ = (Σ_θ^{-1} + S_θ^{-1})^{-1} = Σ_θ − Σ_θ(Σ_θ + S_θ)^{-1}Σ_θ = S_θ − S_θ(S_θ + Σ_θ)^{-1}S_θ,
m_{θ,g} = Σ_θ(Σ_θ + S_θ)^{-1} g = R_θ S_θ^{-1} g.   (9)
That is, under the auxiliary model the latent variables of interest are drawn from their
posterior given the surrogate data g. Again we can describe the sampling process via a
draw from a spherical Gaussian:
η ∼ N(0, I),   f = L_{R_θ} η + m_{θ,g},   where L_{R_θ} L_{R_θ}^⊤ = R_θ.   (10)
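A short sketch of the linear algebra behind equations (9) and (10), with invented names: Sigma stands for Σ_θ, S_diag for the diagonal of a diagonal S_θ, and the jitter term is an assumption added for numerical stability.

import numpy as np

def surrogate_posterior(Sigma, S_diag, g):
    """R_theta and m_{theta,g} of equation (9), for diagonal S_theta."""
    S = np.diag(S_diag)
    A = np.linalg.solve(Sigma + S, Sigma)        # (Sigma + S)^{-1} Sigma
    R = Sigma - Sigma @ A                        # R_theta
    m = Sigma @ np.linalg.solve(Sigma + S, g)    # m_{theta,g}
    return R, m

def latents_from_eta(R, m, eta, jitter=1e-10):
    """f = L_{R_theta} eta + m_{theta,g}, equation (10)."""
    L_R = np.linalg.cholesky(R + jitter * np.eye(len(R)))
    return L_R @ eta + m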
We then condition on the "whitened" variables η and the surrogate data g while updating
the hyperparameters θ. The implied latent variables f(θ, η, g) will remain a plausible draw
from the surrogate posterior for the current hyperparameters. This is illustrated in Figure 2.
We can leave the joint distribution (3) invariant by updating the following conditional
distribution derived from the above generative model:
P(θ | η, g, data) ∝ P(θ, η, g, data) ∝ L(f(θ, η, g)) N(g; 0, Σ_θ + S_θ) p_h(θ).   (11)
The Metropolis-Hastings Algorithm 3 contains a ratio of these terms in the acceptance rule.
3.1 Slice sampling
The Metropolis-Hastings algorithms discussed so far have a proposal distribution q(θ′; θ)
that must be set and tuned. The efficiency of the algorithms depends crucially on careful
choice of the scale σ of the proposal distribution. Slice sampling [2] is a family of adaptive
search procedures that are much more robust to the choice of scale parameter.
Algorithm 3 Surrogate data M-H
  Input: θ, f; prop. dist. q; model of Sec. 3.
  Output: Next θ, f
  1: Draw surrogate data: g ∼ N(f, S_θ)
  2: Compute implied latent variates: η = L_{R_θ}^{-1}(f − m_{θ,g})
  3: Propose θ′ ∼ q(θ′; θ)
  4: Compute function f′ = L_{R_θ′} η + m_{θ′,g}
  5: Draw u ∼ Uniform(0, 1)
  6: if u < [L(f′) N(g; 0, Σ_θ′ + S_θ′) p_h(θ′) q(θ; θ′)] / [L(f) N(g; 0, Σ_θ + S_θ) p_h(θ) q(θ′; θ)]
  7:   return θ′, f′      ▹ Accept new state
  8: else
  9:   return θ, f        ▹ Keep current state

Algorithm 4 Surrogate data slice sampling
  Input: θ, f; scale σ; model of Sec. 3.
  Output: Next f, θ
  1: Draw surrogate data: g ∼ N(f, S_θ)
  2: Compute implied latent variates: η = L_{R_θ}^{-1}(f − m_{θ,g})
  3: Randomly center a bracket: v ∼ Uniform(0, σ), θ_min = θ − v, θ_max = θ_min + σ
  4: Draw u ∼ Uniform(0, 1)
  5: Determine threshold: y = u L(f) N(g; 0, Σ_θ + S_θ) p_h(θ)
  6: Draw proposal: θ′ ∼ Uniform(θ_min, θ_max)
  7: Compute function f′ = L_{R_θ′} η + m_{θ′,g}
  8: if L(f′) N(g; 0, Σ_θ′ + S_θ′) p_h(θ′) > y
  9:   return f′, θ′
  10: else if θ′ < θ
  11:   Shrink bracket minimum: θ_min = θ′
  12: else
  13:   Shrink bracket maximum: θ_max = θ′
  14: goto 6
Algorithm 4 applies one possible slice sampling algorithm to a scalar hyperparameter θ in
the surrogate data model of this section. It has a free parameter σ, the scale of the initial
proposal distribution. However, careful tuning of this parameter is not required. If the initial
scale is set to a large value, such as the width of the prior, then the width of the proposals will
shrink to an acceptable range exponentially quickly. Stepping-out procedures [2] could be
used to adapt initial scales that are too small. We assume that axis-aligned hyperparameter
moves will be effective, although reparameterizations could improve performance [e.g. 9].
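The shrinking bracket of Algorithm 4 can be written compactly. In this sketch (names invented), log_target(t) is assumed to return log[L(f′) N(g; 0, Σ_t + S_t) p_h(t)] with f′ recomputed from the fixed (η, g) as in steps 2 and 7; that recomputation is supplied by the caller and not shown here.

import numpy as np

def slice_sample_theta(theta, log_target, width, rng=None):
    """One slice-sampling update of a scalar theta (Algorithm 4, in log space)."""
    rng = rng or np.random.default_rng()
    log_y = log_target(theta) + np.log(rng.uniform())    # step 5: threshold
    lo = theta - width * rng.uniform()                   # step 3: random bracket
    hi = lo + width
    while True:
        theta_new = rng.uniform(lo, hi)                  # step 6: propose
        if log_target(theta_new) > log_y:                # step 8: accept
            return theta_new
        if theta_new < theta:                            # steps 10-13: shrink
            lo = theta_new
        else:
            hi = theta_new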
3.2 The auxiliary noise covariance S_θ
The surrogate data g and noise covariance S_θ define a pseudo-posterior distribution that
softly specifies a plausible region within which the latent variables f are updated. The noise
covariance determines the size of this region. The first two baseline algorithms of Section 2
result from limiting cases of S_θ = αI: 1) if α = 0 the surrogate data and the current latent
variables are equal and the acceptance ratio reduces to that of Algorithm 1. 2) as α → ∞
the observations are uninformative about the current state and the pseudo-posterior tends
to the prior. In the limit, the acceptance ratio reduces to that of Algorithm 2. One could
choose α based on preliminary runs, but such tuning would be burdensome.
For likelihood terms that factorize, L(f) = ∏_i L_i(f_i), we can measure how much the likelihood restricts each variable individually:
P(f_i | L_i, θ) ∝ L_i(f_i) N(f_i; 0, (Σ_θ)_{ii}).   (12)
A Gaussian can be fitted by moment matching or a Laplace approximation (matching second derivatives at the mode). Such fits, or close approximations, are often possible analytically and can always be performed numerically as the distribution is only one-dimensional.
Given a Gaussian fit to the site-posterior (12) with variance v_i, we can set the auxiliary noise to a level that would result in the same posterior variance at that site alone:
(S_θ)_{ii} = (v_i^{-1} − (Σ_θ)_{ii}^{-1})^{-1}. (Any negative (S_θ)_{ii} must be thresholded.) The moment matching procedure is a grossly simplified first step of "assumed density filtering" or "expectation
propagation" [10], which are too expensive for our use in the inner-loop of a Markov chain.
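As one concrete case, a sketch (ours, names invented) of the site-based noise level for Poisson observations with log-rate f, assuming a Laplace fit whose mode f̂_i for each one-dimensional site posterior (12) has already been found numerically; the negative second derivative of the Poisson log-likelihood at f̂_i is exp(f̂_i).

import numpy as np

def site_noise_poisson(prior_vars, f_hat, big=1e6):
    """(S_theta)_ii via a Laplace fit of the site posteriors (12).
    prior_vars holds (Sigma_theta)_ii; f_hat holds the site-posterior modes."""
    v = 1.0 / (1.0 / prior_vars + np.exp(f_hat))   # Laplace site variances v_i
    s = 1.0 / (1.0 / v - 1.0 / prior_vars)         # invert the Sec. 3.2 relation
    return np.where(s > 0, s, big)                 # threshold negative values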
4 Related work
We have discussed samplers that jointly update strongly-coupled latent variables and hyperparameters. The hyperparameters can move further in joint moves than their narrow
conditional posteriors (e.g., Figure 1b) would allow. A generic way of jointly sampling real-valued variables is Hamiltonian/Hybrid Monte Carlo (HMC) [7, 8]. However, this method
is cumbersome to implement and tune, and using HMC to jointly update latent variables
and hyperparameters in hierarchical models does not itself seem to improve sampling [11].
Christensen et al. [9] have also proposed a robust representation for sampling in latent
Gaussian models. They use an approximation to the target posterior distribution to construct a reparameterization where the unknown variables are close to independent. The
approximation replaces the likelihood with a Gaussian form proportional to N(f; f̂, Λ(f̂)^{-1}):
f̂ = argmax_f L(f),   Λ(f̂)_{ij} = −∂² log L(f) / (∂f_i ∂f_j) |_{f=f̂},   (13)
where Λ is often diagonal, or it was suggested one would only take the diagonal part.
This Taylor approximation looks like a Laplace approximation, except that the likelihood
function is not a probability density in f. This likelihood fit results in an approximate
Gaussian posterior N(f; m_{θ,g=f̂}, R_θ) as found in (9), with noise S_θ = Λ(f̂)^{-1} and data g = f̂.
Thinking of the current latent variables as a draw from this approximate posterior,
η ∼ N(0, I), f = L_{R_θ} η + m_{θ,f̂}, suggests using the reparameterization η = L_{R_θ}^{-1}(f − m_{θ,f̂}).
We can then fix the new variables and update the hyperparameters under
P(θ | η, data) ∝ L(f(θ, η)) N(f(θ, η); 0, Σ_θ) p_h(θ) |L_{R_θ}|.   (14)
When the likelihood is Gaussian, the reparameterized variables η are independent of each
other and the hyperparameters. The hope is that approximating non-Gaussian likelihoods
will result in nearly-independent parameterizations on which Markov chains will mix rapidly.
Taylor expanding some common log-likelihoods around the maximum is not well defined,
for example approximating probit or logistic likelihoods for binary classification, or Poisson observations with zero counts. These Taylor expansions could be seen as giving flat or
undefined Gaussian approximations that do not reweight the prior. When all of the likelihood terms are flat the reparameterization approach reduces to that of Section 2.1. The
alternative S_θ auxiliary covariances that we have proposed could be used instead.
The surrogate data samplers of Section 3 can also be viewed as using reparameterizations,
by treating η = L_{R_θ}^{-1}(f − m_{θ,g}) as an arbitrary random reparameterization for making proposals. A proposal density q(η′, θ′; η, θ) in the reparameterized space must be multiplied by
the Jacobian |L_{R_θ′}^{-1}| to give a proposal density in the original parameterization. The probability of proposing the reparameterization must also be included in the Metropolis-Hastings
acceptance probability:
min(1, [P(θ′, f′ | data) · P(g | f′, S_θ′) · q(θ; θ′) · |L_{R_θ}^{-1}|] / [P(θ, f | data) · P(g | f, S_θ) · q(θ′; θ) · |L_{R_θ′}^{-1}|]).   (15)
A few lines of linear algebra confirm that, as it must do, the same acceptance ratio results
as before. Alternatively, substituting (3) into (15) shows that the acceptance probability
is very similar to that obtained by applying Metropolis-Hastings to (14) as proposed by
Christensen et al. [9]. The differences are that the new latent variables f′ are computed
using different pseudo-posterior means and the surrogate data method has an extra term
for the random, rather than fixed, choice of reparameterization.
The surrogate data sampler is easier to implement than the previous reparameterization
work because the surrogate posterior is centred around the current latent variables. This
means that 1) no point estimate, such as the maximum likelihood f̂, is required. 2) picking
the noise covariance S_θ poorly may still produce a workable method, whereas a fixed reparameterization can work badly if the true posterior distribution is in the tails of the Gaussian
approximation. Christensen et al. [9] pointed out that centering the approximate Gaussian likelihood in their reparameterization around the current state is tempting, but that
computing the Jacobian of the transformation is then intractable. By construction, the
surrogate data model centers the reparameterization near to the current state.
5 Experiments
We empirically compare the performance of the various approaches to GP hyperparameter
sampling on four data sets: one regression, one classification, and two Cox process inference
problems. Further details are in the rest of this section, with full code as supplementary
material. The results are summarized in Figure 3 followed by a discussion section.
In each of the experimental configurations, we ran ten independent chains with different
random seeds, burning in for 1000 iterations and sampling for 5000 iterations. We quantify
the mixing of the chain by estimating the effective number of samples of the complete
data likelihood trace using R-CODA [12], and compare that with three cost metrics: the
number of hyperparameter settings considered (each requiring a small number of covariance
decompositions with O(n^3) time complexity), the number of likelihood evaluations, and the
total elapsed time on a single core of an Intel Xeon 3GHz CPU.
The experiments are designed to test the mixing of hyperparameters θ while sampling from
the joint posterior (3). All of the discussed approaches except Algorithm 1 update the latent
variables f as a side-effect. However, further transition operators for the latent variables for
fixed hyperparameters are required. In Algorithm 2 the "whitened" variables ν remain fixed;
the latent variables and hyperparameters are constrained to satisfy f = L_{Σ_θ} ν. The surrogate
data samplers are ergodic: the full joint posterior distribution will eventually be explored.
However, each update changes the hyperparameters and requires expensive computations
involving covariances. After computing the covariances for one set of hyperparameters, it
makes sense to apply several cheap updates to the latent variables. For every method we
applied ten updates of elliptical slice sampling [1] to the latent variables f between each
hyperparameter update. One could also consider applying elliptical slice sampling to a
reparameterized representation; for simplicity of comparison we do not. Independently of
our work, Titsias [13] has used surrogate-data-like reparameterizations to update latent
variables for fixed hyperparameters.
Methods We implemented six methods for updating Gaussian covariance hyperparameters. Each method used the same slice sampler, as in Algorithm 4, applied to the following
model representations. fixed: fixing the latent function f [14]. prior-white: whitening
with the prior. surr-site: using surrogate data with the noise level set to match the site
posterior (12). We used Laplace approximations for the Poisson likelihood. For classification problems we used moment matching, because Laplace approximations do not work
well [15]. surr-taylor: using surrogate data with noise variance set via Taylor expansion of
the log-likelihood (13). Infinite variances were truncated to a large value. post-taylor and
post-site: as for the surr- methods but a fixed reparameterization based on a posterior
approximation (14).
Binary Classification (Ionosphere) We evaluated four different methods for performing binary GP classification: fixed, prior-white, surr-site and post-site. We applied
these methods to the Ionosphere dataset [16], using 200 training data and 34 dimensions.
We used a logistic likelihood with zero-mean prior, inferring lengthscales as well as signal variance. The -taylor methods reduce to other methods or don?t apply because the
maximum of the log-likelihood is at plus or minus infinity.
Gaussian Regression (Synthetic) When the observations have Gaussian noise the
post-taylor reparameterization of Christensen et al. [9] makes the hyperparameters and
latent variables exactly independent. The random centering of the surrogate data model will
be less effective. We used a Gaussian regression problem to assess how much worse the surrogate data method is compared to an ideal reparameterization. The synthetic data set had
200 input points in 10-D drawn uniformly within a unit hypercube. The GP had zero mean,
unit signal variance and its ten lengthscales in (2) drawn from Uniform(0, √10). Observation
noise had variance 0.09. We applied the fixed, prior-white, surr-site/surr-taylor,
and post-site/post-taylor methods. For Gaussian likelihoods the -site and -taylor
methods coincide: the auxiliary noise matches the observation noise (S_θ = 0.09 I).
Cox process inference We tested all six methods on an inhomogeneous Poisson process
with a Gaussian process prior for the log-rate. We sampled the hyperparameters in (2) and
a mean offset to the log-rate. The model was applied to two point process datasets: 1) a
record of mining disasters [17] with 191 events in 112 bins of 365 days. 2) 195 redwood tree
locations in a region scaled to the unit square [18] split into 25×25 = 625 bins. The results
for the mining problem were initially highly variable. As the mining experiments were also
the quickest we re-ran each chain for 20,000 iterations.
[Figure 3 residue removed: three bar-chart panels (effective samples per likelihood evaluation, per covariance construction, and per second) comparing the methods fixed, prior-white, surr-site, post-site, surr-taylor, and post-taylor on the ionosphere, synthetic, mining, and redwoods datasets.]
Figure 3: The results of experimental comparisons of six MCMC methods for GP hyperparameter
inference on four data sets. Each figure shows four groups of bars (one for each experiment) and the
vertical axis shows the effective number of samples of the complete data likelihood per unit cost.
The costs are per likelihood evaluation (left), per covariance construction (center), and per second
(right). Means and standard errors for 10 runs are shown. Each group of bars has been rescaled for
readability: the number beneath each group gives the effective samples for the surr-site method,
which always has bars of height 1. Bars are missing where methods are inapplicable (see text).
6 Discussion
On the Ionosphere classification problem both of the -site methods worked much better
than the two baselines. We slightly prefer surr-site as it involves less problem-specific
derivations than post-site.
On the synthetic test the post- and surr- methods perform very similarly. We had expected
the existing post- method to have an advantage of perhaps up to 2-3x, but that was not
realized on this particular dataset. The post- methods had a slight time advantage, but
this is down to implementation details and is not notable.
On the mining problem the Poisson likelihoods are often close to Gaussian, so the existing post-taylor approximation works well, as do all of our new proposed methods. The
Gaussian approximations to the Poisson likelihood fit most poorly to sites with zero counts.
The redwood dataset discretizes two-dimensional space, leading to a large number of bins.
The majority of these bins have zero counts, many more than the mining dataset. Taylor
expanding the likelihood gives no likelihood contribution for bins with zero counts, so it
is unsurprising that post-taylor performs similarly to prior-white. While surr-taylor
works better, the best results here come from using approximations to the site-posterior (12).
For unreasonably fine discretizations the results can be different again: the site- reparameterizations do not always work well.
Our empirical investigation used slice sampling because it is easy to implement and use.
However, all of the representations we discuss could be combined with any other MCMC
method, such as [19] recently used for Cox processes. The new surrogate data and post-site
representations offer state-of-the-art performance and are the first such advanced methods
to be applicable to Gaussian process classification.
An important message from our results is that fixing the latent variables and updating
hyperparameters according to the conditional posterior, as commonly used by GP practitioners, can work exceedingly poorly. Even the simple reparameterization of "whitening
the prior" discussed in Section 2.1 works much better on problems where smoothness is
important in the posterior. Even if site approximations are difficult and the more advanced methods presented are inapplicable, the simple whitening reparameterization should
be given serious consideration when performing MCMC inference of hyperparameters.
Acknowledgements
We thank an anonymous reviewer for useful comments. This work was supported in part
by the IST Programme of the European Community, under the PASCAL2 Network of
Excellence, IST-2007-216886. This publication only reflects the authors? views. RPA is a
junior fellow of the Canadian Institute for Advanced Research.
References
[1] Iain Murray, Ryan Prescott Adams, and David J.C. MacKay. Elliptical slice sampling. Journal of Machine Learning Research: W&CP, 9:541-548, 2010. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS).
[2] Radford M. Neal. Slice sampling. Annals of Statistics, 31(3):705-767, 2003.
[3] Deepak K. Agarwal and Alan E. Gelfand. Slice sampling for simulation based fitting of spatial data models. Statistics and Computing, 15(1):61-69, 2005.
[4] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[5] Luke Tierney. Markov chains for exploring posterior distributions. The Annals of Statistics, 22(4):1701-1728, 1994.
[6] Michalis Titsias, Neil D. Lawrence, and Magnus Rattray. Efficient sampling for Gaussian process inference using control variables. In Advances in Neural Information Processing Systems 21, pages 1681-1688. MIT Press, 2009.
[7] Simon Duane, A. D. Kennedy, Brian J. Pendleton, and Duncan Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216-222, September 1987.
[8] Radford M. Neal. MCMC using Hamiltonian dynamics. To appear in the Handbook of Markov Chain Monte Carlo, Chapman & Hall / CRC Press, 2011. http://www.cs.toronto.edu/~radford/ftp/ham-mcmc.pdf.
[9] Ole F. Christensen, Gareth O. Roberts, and Martin Sköld. Robust Markov chain Monte Carlo methods for spatial generalized linear mixed models. Journal of Computational and Graphical Statistics, 15(1):1-17, 2006.
[10] Thomas Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the 17th Annual Conference on Uncertainty in Artificial Intelligence (UAI), pages 362-369, 2001. Corrected version available from http://research.microsoft.com/~minka/papers/ep/.
[11] Kiam Choo. Learning hyperparameters for neural network models using Hamiltonian dynamics. Master's thesis, Department of Computer Science, University of Toronto, 2000. Available from http://www.cs.toronto.edu/~radford/ftp/kiam-thesis.ps.
[12] Mary Kathryn Cowles, Nicky Best, Karen Vines, and Martyn Plummer. R-CODA 0.10-5, 2006. http://www-fis.iarc.fr/coda/.
[13] Michalis Titsias. Auxiliary sampling using imaginary data, 2010. Unpublished.
[14] Radford M. Neal. Regression and classification using Gaussian process priors. In J. M. Bernardo et al., editors, Bayesian Statistics 6, pages 475-501. OU Press, 1999.
[15] Malte Kuss and Carl Edward Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679-1704, 2005.
[16] V. G. Sigillito, S. P. Wing, L. V. Hutton, and K. B. Baker. Classification of radar returns from the ionosphere using neural networks. Johns Hopkins APL Technical Digest, 10:262-266, 1989.
[17] R. G. Jarrett. A note on the intervals between coal-mining disasters. Biometrika, 66(1):191-193, 1979.
[18] Brian D. Ripley. Modelling spatial patterns. Journal of the Royal Statistical Society, Series B, 39:172-212, 1977.
[19] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society, Series B (Methodological), 2011. To appear.
3,440 | 4,115 | Supervised Clustering
Reza Bosagh Zadeh
Stanford University
[email protected]
Pranjal Awasthi
Carnegie Mellon University
[email protected]
Abstract
Despite the ubiquity of clustering as a tool in unsupervised learning, there is not
yet a consensus on a formal theory, and the vast majority of work in this direction
has focused on unsupervised clustering. We study a recently proposed framework
for supervised clustering where there is access to a teacher. We give an improved
generic algorithm to cluster any concept class in that model. Our algorithm is
query-efficient in the sense that it involves only a small amount of interaction
with the teacher. We also present and study two natural generalizations of the
model. The model assumes that the teacher response to the algorithm is perfect.
We eliminate this limitation by proposing a noisy model and give an algorithm for
clustering the class of intervals in this noisy model. We also propose a dynamic
model where the teacher sees a random subset of the points. Finally, for datasets
satisfying a spectrum of weak to strong properties, we give query bounds, and
show that a class of clustering functions containing Single-Linkage will find the
target clustering under the strongest property.
1 Introduction
Clustering has traditionally been a tool of unsupervised learning. Despite widespread usage across
several fields there is not yet a well-established theory to describe clustering [ABD09, AL10, Blu09,
GvLW09]. Recently, Balcan and Blum [BB08] proposed a supervised model of clustering, where
there is access to a teacher. We further explore the implications of their model and extend it in several
important directions. As a motivating example, consider Google News, where news documents are
gathered from the web and need to be clustered into groups, each corresponding to a particular news
story. In this case, it is clear to the human eye (the teacher) which group each document should
belong to, but the sheer number of articles makes clustering by hand prohibitive. In this case, an
algorithm can interact with the teacher to aid in clustering the documents without asking too much
of the teacher.
Traditional approaches to clustering optimize some objective function, like the k-means or the k-median, over the given set of points [KVV00, CGTS99]. These approaches work under the implicit
assumption that by minimizing a certain objective function one can reach close to the underlying
ground truth clustering. Alternatively, another line of work makes strong assumptions on the nature of the data. One popular assumption in the literature is that the data comes from a mixture of
Gaussians [Das99]. However, when dealing with web-pages, documents, etc., it is not very clear if
these assumptions are reasonable. In fact, there might be no principled way to reach the target clustering which a teacher has in mind without actually interacting with him/her. For example, consider
documents representing news articles. These documents could be clustered as {politics, sports,
entertainment, other}. However, this is just one of the many possible clusterings. The clustering
{entertainment + sports, politics, other} is equally likely a priori. Or perhaps the teacher would
like these articles to be clustered into {news articles} vs. {opinion pieces}. These scenarios motivate the need to consider the problem of clustering under feedback. Recently, there has been an
interest in investigating such models and to come up with a more formal theoretical framework for
analyzing clustering problems and algorithms. One such framework was proposed by Balcan and
1
Blum [BB08] who, motivated by different models for learning under queries, proposed a model for
clustering under queries.
The model is similar to the Equivalence Query (EQ) model of learning [Ang98] but with a different kind of feedback. We assume that the given set S of m points belongs to k target clusters
{c1 , c2 , . . . , ck }, where each cluster is defined by some concept c belonging to a concept class C.
For example, the points belonging to the cluster c1 will be the set {x ∈ S | c1(x) = 1}. We also
assume that each point belongs to exactly one of the k clusters. As in the EQ model of learning,
the algorithm presents a hypothesis clustering {h1 , h2 , . . . , hk0 } to the teacher. If the clustering is
incorrect the algorithm gets some feedback from the teacher. However, the feedback in this case is
different from the one in the EQ model. In the learning model, the algorithm gets a specific point x
as a counter-example to its proposed hypothesis. For clustering problems this may not be a very natural
form of feedback. In a realistic scenario, the teacher can look at the clustering proposed and give
some limited feedback. Hence, the model in [BB08] considers the following feedback: If there is a
cluster hi which contains points from two or more target clusters, then the teacher can ask the algorithm to split that cluster by issuing the request split(hi ). Note that the teacher does not specify how
the cluster hi should be split. If there are clusters hi and hj such that hi ? hj is a subset of one of the
target clusters, then the teacher can ask the algorithm to merge these two clusters by issuing the request merge(hi , hj ). The goal of the algorithm is to be query efficient ? O(poly(k, log m, log |C|))
queries, and computationally efficient ? running time of O(poly(k, m, log |C|)). Notice, that if we
allow the algorithm to use the number of queries linear in m, then there is a trivial algorithm, which
starts with all the points in separate clusters and then merges clusters as requested by the teacher.
One could also imagine applying this split-merge framework to cases where the optimal clustering
does not necessarily belong to a natural concept class, but instead satisfies some natural separation conditions (ex., large margin conditions). We also study and present results for such problem
instances.
1.1 Contributions
In their paper, Balcan and Blum [BB08] gave efficient clustering algorithms for the class of intervals
and the class of disjunctions over {0, 1}n. We extend those results by constructing an algorithm for
clustering the class of axis-parallel rectangles in d dimensions. Our algorithm is computationally
efficient(for constant d) and uses a small number of queries. We generalize our algorithm to cluster
the class of hyperplanes in d dimensions with known slopes. Balcan and Blum [BB08] also gave a
generic algorithm for any finite concept class C, which uses O(k^3 log |C|) queries. We reduce the
query complexity of the generic algorithm from O(k^3 log |C|) to O(k log |C|). Furthermore, the
new algorithm is much simpler than the one from [BB08]. We study two natural generalization of
the original model. In the original model the teacher is only allowed to merge two clusters hi and
hj if hi ? hj is a subset of one of the target clusters. We consider a noise tolerant version of this in
which the teacher can ask the algorithm to merge hi and hj if both the clusters have at least some
fixed fraction of points belonging to the same target cluster. This is a more natural model since we
allow for the teacher requests to be imperfect.
In the original model we assume that the teacher has access to all the points. In practice, we are
interested in clustering a large domain of points and the teacher might only have access to a random
subset of these points at every step. For example, in the case of clustering news documents, our goal
is to figure out the target clustering which reflects the teacher preferences. But the teacher sees a
small fresh set of news articles very day. We propose a model which takes into account the fact that
at each step the split and merge requests might be on a different set of points. In both the above
models the straight forward algorithm for clustering the class of intervals fails. We develop new
algorithms for clustering intervals in both the models.
We also apply the split-merge framework of [BB08] to datasets satisfying a spectrum of weak to
strong properties and design algorithms for clustering such data sets. Along the way, we also show
that a class of clustering functions containing Single-Linkage will find the target clustering under
the strict threshold property (Theorem 6.1).
2
2 The model
We consider the model proposed by Balcan and Blum [BB08]. The clustering algorithm is given a
set S of m points. Each point belongs to one of the k clusters. Each cluster is defined by a function
f ∈ C, where C is a concept class. The goal of the algorithm is to figure out the correct clustering
by interacting with the teacher as follows:
1. The algorithm proposes a hypothesis clustering {h1 , h2 , . . . , hJ } to the teacher.
2. The teacher can request split(hi ) if hi contains points from two or more target clusters.
The teacher can request merge(hi, hj) if hi ∪ hj is a subset of one of the target clusters.
The assumption is that there is no noise in the teacher response. The goal is to use as few queries to
the teacher as possible. Ideally, we would like the number of queries to be poly(k, log m, log |C|).
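To fix ideas, here is a toy sketch (ours, not from the paper) of a simulated teacher implementing exactly this feedback rule; it assumes points are hashable, a hypothesis is a list of sets, and the target clustering is given as a map from point to cluster id.

def teacher_feedback(hypothesis, target):
    """hypothesis: list of sets of points; target: dict point -> cluster id.
    Returns one legal request, or ("correct",) if the clustering is right."""
    for i, h in enumerate(hypothesis):
        if len({target[x] for x in h}) > 1:
            return ("split", i)                  # h_i is impure
    for i, hi in enumerate(hypothesis):
        for j in range(i + 1, len(hypothesis)):
            if len({target[x] for x in hi | hypothesis[j]}) == 1:
                return ("merge", i, j)           # h_i and h_j fit one cluster
    return ("correct",)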
2.1 A generic algorithm for learning any finite concept class
We reduce the query complexity of the generic algorithm for learning any concept class [BB08],
from O(k^3 log |C|) to O(k log |C|). In addition, our algorithm is simpler than the original one. The
new algorithm is described below.
Given m points let VS = {the set of all possible k-clusterings of the given points using concepts
in C}. Notice that |VS| ≤ |C|^k. Given a set h ⊆ S of points we say that a given clustering R
is consistent with h if h appears as a subset of one of the clusters in R. Define VS(h) = {R ∈
VS | R is consistent with h}. At each step the algorithm outputs clusters as follows:
1. Initialize i = 1.
2. Find the largest set of points h_i such that |VS(h_i)| ≥ (1/2)|VS|.
3. Output h_i as a cluster.
4. Set i = i + 1 and repeat steps 2-3 on the remaining points until every point has been
assigned to some cluster.
5. Present the clustering {h_1, h_2, . . . , h_J} to the teacher.
If the teacher says split(hi), remove all the clusterings in VS which are consistent with hi. If the
teacher says merge(hi, hj), remove all the clusterings in VS which are inconsistent with hi ∪ hj.
Theorem 2.1. The generic algorithm can cluster any finite concept class using at most k log |C|
queries.
Proof. At each request, if the teacher says split(hi), then all the clusterings consistent with hi are
removed, which by the construction followed by the algorithm will be at least half of |VS|. If the
teacher says merge(hi, hj), i < j, then all the clusterings inconsistent with hi ∪ hj are removed.
This set will be at least half of |VS|, since otherwise the number of clusterings consistent with
hi ∪ hj would be more than half of |VS|, which contradicts the maximality of hi. Hence, after each
query at least half of the version space is removed. From the above claim we notice that the total
number of queries will be at most log |VS| ≤ log |C|^k = k log |C|.
The analysis can be improved if the VC-dimension d of the concept class C is much smaller than
log |C|. In this case the size of VS can be bounded from above by C[m]^k, where C[m] is the
number of ways to split m points using concepts in C. Also, from Sauer's lemma [Vap98] we know
that C[m] ≤ m^d. Hence, we get |VS| ≤ m^{kd}. This gives a query complexity of O(kd log m).
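The version-space pruning used in the proof of Theorem 2.1 translates directly into code. A brute-force sketch (ours, names invented; enumerating VS explicitly is exponential in general, so this is an illustration only):

def consistent(clustering, h):
    """True if the point set h lies inside a single cluster of `clustering`."""
    return any(h <= c for c in clustering)

def update_version_space(VS, request, hypothesis):
    """Prune VS (a list of clusterings, each a collection of frozensets)
    exactly as in the proof of Theorem 2.1."""
    if request[0] == "split":
        h = hypothesis[request[1]]
        return [R for R in VS if not consistent(R, h)]   # drops >= half of VS
    _, i, j = request                                    # a merge request
    h = hypothesis[i] | hypothesis[j]
    return [R for R in VS if consistent(R, h)]           # drops >= half of VS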
3 Clustering geometric concepts
We now present an algorithm for clustering the class of rectangles in 2 dimensions. We first present
a simple but less efficient algorithm for the problem. The algorithm uses O((k log m)^3) queries
and runs in time poly(k, m). In the appendix, we show that the query complexity of the algorithm
can be improved to O((k log m)^2). Our algorithm generalizes in a natural way to rectangles in d
dimensional space, and to hyperplanes in d dimensions with known slopes.
3
3.1 An algorithm for clustering rectangles
Each rectangle c in the target clustering can be described by four points (a_i, a_j), (b_i, b_j) such that
(x, y) ∈ c iff a_i < x < a_j and b_i < y < b_j. Hence, corresponding to any k-clustering there are at
most 2k points a_1, a_2, . . . , a_{2k} on the x-axis and at most 2k points b_1, b_2, . . . , b_{2k} on the y-axis. We
call these points the target points. The algorithm works by finding these points. During its course
the algorithm maintains a set of points on the x-axis and a set of points on the y-axis. These points
divide the entire space into rectangular regions. The algorithm uses these regions as its hypothesis
clusters. The algorithm is sketched below:
1. Start with points (a′_start, a′_end) on the x-axis and points (b′_start, b′_end) on the y-axis, such that all the
points are contained in the rectangle defined by these points.
2. At each step, cluster the m points according to the region in which they belong. Present
this clustering to the teacher (see the sketch after this list).
3. On a merge request, simply merge the two clusters.
4. On a split of (a′_i, a′_j), (b′_i, b′_j), create a new point a′_r such that a′_i < a′_r < a′_j, and the
projection of all the points onto (a′_i, a′_j) is divided in half by a′_r. Similarly, create a
new point b′_r such that b′_i < b′_r < b′_j, and the projection of all the points onto (b′_i, b′_j) is
divided in half by b′_r. Abandon all the merges done so far.
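A sketch (ours, names invented) of step 2: given the current split points on each axis, the hypothesis clusters are simply the occupied grid cells.

from collections import defaultdict
import bisect

def region_clusters(points, x_cuts, y_cuts):
    """points: list of (x, y) pairs; x_cuts, y_cuts: sorted split coordinates.
    Returns one cluster per occupied rectangular region."""
    cells = defaultdict(list)
    for (x, y) in points:
        cell = (bisect.bisect(x_cuts, x), bisect.bisect(y_cuts, y))
        cells[cell].append((x, y))
    return list(cells.values())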
Theorem 3.1. The algorithm can cluster the class of rectangles in 2 dimensions using at most
O((k log m)^3) queries.
Proof. Let us first bound the total number of split requests. If the teacher says split on
(x_i, x_j), (y_i, y_j), then we know that either (x_i, x_j) contains a target point a or (y_i, y_j) contains
a target point b, or both. By creating two splits we are ensuring that the size of at least one of the
regions containing a target point is reduced by half. There are at most 2k intervals on the x-axis and
at most 2k intervals on the y-axis. Hence, the total number of split requests is ≤ 4k log m. Now
let us bound the merge requests. Between any two split requests the total number of merge requests
will be at most the total number of regions, which is O((k log m)^2). Since t points on the x and
the y axes can create at most t^2 regions, we get that the total number of merge requests is at most
O((k log m)^3). Hence, the total number of queries made by the algorithm is O((k log m)^3).
If we are a bit more careful, we can avoid redoing the merges after every split and reduce the query
complexity to O((k log m)^2). So, for rectangles we have the following result.¹
Theorem 3.2. There is an algorithm which can cluster the class of rectangles in 2 dimensions using
at most O((k log m)^2) queries.
We can also generalize this algorithm to work for rectangles in a d-dimensional space. Hence, we
get the following result.
Corollary 3.3. There is an algorithm which can cluster the class of rectangles in d dimensions using
at most O((kd log m)^d) queries.
Corollary 3.4. There is an algorithm which can cluster the class of hyperplanes in d dimensions
having a known set of slopes of size at most s, using at most O((kds log m)^d) queries.
4 Dynamic model
We now study a natural generalization of the original model. In the original model we assume
that the teacher has access to the entire set of points. In practice, this will rarely be the case. For
example, in the case of clustering news articles, each day the teacher sees a small fresh set of articles
and provides feedback. Based on this the algorithm must be able to figure out the target clustering
for the entire space of articles. More formally, let X be the space of all the points. There is a target
k-clustering for these points, where each cluster corresponds to a concept in a concept class C. At each
step, the world picks m points and the algorithm clusters these m points and presents the clustering
to the teacher. If the teacher is unhappy with the clustering he may provide feedback. Note that
¹ Proof is omitted due to space constraints.
the teacher need not provide feedback every time the algorithm proposes an incorrect clustering.
The goal of the algorithm is to minimize the amount of feedback necessary to figure out the target
clustering. Notice that at each step the algorithm may get a fresh set of m points. We assume that
the requests have no noise and the algorithm has access to all the points in X. We now give an
algorithm for learning intervals in this model.
4.1 An algorithm for clustering intervals
We assume that the space X is discretized into n points. Let us assume that there exist points {a_1, a_2, …, a_{k+1}} on the x-axis such that the target clustering is the intervals {[a_1, a_2], [a_2, a_3], …, [a_k, a_{k+1}]}. The algorithm maintains a set of points on the x-axis and uses
the intervals induced by them as its hypothesis. Also each interval is associated with a state of
marked/unmarked. When a new interval is created, it is always unmarked. An interval is marked
if we know that none of the points (the a_i's) in the target clustering can be present in that interval. The
algorithm is sketched below:
1. Start with one unmarked interval containing all the points in the space.
2. Given a set of m points, first form preliminary clusters h_1, …, h_J such that each cluster corresponds to an interval. Next output the final clusters as follows:
• Set i = 1.
• If h_i and h_{i+1} correspond to adjacent intervals and at least one of them is unmarked, then output h_i ∪ h_{i+1} and set i = i + 2. Else output h_i and set i = i + 1.
3. On a split request, split every unmarked interval in the cluster in half.
4. On a merge request, mark every unmarked interval contained in the cluster. (A short sketch of both handlers follows this list.)
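As one concrete rendering, a minimal Python sketch of the split and merge handlers; here intervals are half-open index ranges over the n discretized points, a representation we choose for illustration:

    def handle_split(cluster_intervals, marked):
        # Step 3: halve every unmarked interval in the offending cluster.
        out = []
        for lo, hi in cluster_intervals:
            if (lo, hi) in marked or hi - lo <= 1:
                out.append((lo, hi))
            else:
                mid = (lo + hi) // 2
                out.extend([(lo, mid), (mid, hi)])
        return out

    def handle_merge(cluster_intervals, marked):
        # Step 4: mark every unmarked interval contained in the cluster.
        marked.update(cluster_intervals)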
Theorem 4.1. The algorithm can cluster the class of intervals using at most O(k log n) mistakes.
Proof. Notice that by our construction, every cluster will contain at most 2 unmarked intervals. Let us first bound the total number of split requests. For every point a_i in the target clustering we define two variables left_size(a_i) and right_size(a_i). If a_i is inside a hypothesis interval [x, y], then left_size(a_i) = number of points in [x, a_i] and right_size(a_i) = number of points in [a_i, y]. If a_i is also a boundary point in the hypothesis clustering ([x, a_i], [a_i, y]), then again left_size(a_i) = number of points in [x, a_i] and right_size(a_i) = number of points in [a_i, y]. Notice that every split request reduces either the left_size or the right_size of some boundary point by half. Since there are at most k boundary points in the target clustering, the total number of split requests is ≤ O(k log n). Also note that the number of unmarked intervals is at most O(k log n), since unmarked intervals increase only via split requests. On every merge request either an unmarked interval is marked or two marked intervals are merged. Hence, the total number of merge requests is at most twice the number of unmarked intervals, ≤ O(k log n). Hence, the total number of mistakes is ≤ O(k log n).
It is easy to see that the generic algorithm for learning any finite concept class in the original model also works in this model. Hence, we can learn any finite concept class in this model using at most k log |C| queries.
5 η noise model
The previous two models assume that there is no noise in the teacher requests. This is again an
unrealistic assumption since we cannot expect the teacher responses to be perfect. For example,
if the algorithm proposes a clustering in which there are two clusters which are almost pure, i.e., a
large fraction of the points in both the clusters belong to the same target clusters, then there is a
good chance that the teacher will ask the algorithm to merge these two clusters, especially if the
teacher has access to the clusters through a random subset of the points. In this section we study a
model which removes this assumption. For simplicity, we consider the noisy version of the original
model [BB08]. As in the original model, the algorithm has m points. At each step, the algorithm
proposes a clustering {h_1, h_2, …, h_J} to the teacher and the teacher provides feedback. But now, the feedback is noisy in the following sense:
1. Split: As before, the teacher can say split(h_i) if h_i contains points from more than one target cluster.
2. Merge: The teacher can say merge(h_i, h_j) if h_i and h_j each have at least one point from some target cluster.
It turns out that handling arbitrary noise is difficult. The following Theorem (proof omitted) shows
a counter-example.
Theorem 5.1. Consider m points on a line and k = 2. Any clustering algorithm must use Ω(m) queries in the worst case to figure out the target clustering in the noisy model.
Hence, we now consider a relaxed notion of noise. If the teacher says merge(h_i, h_j), then we assume that at least a constant η fraction of the points in both clusters belong to a single target cluster. Under this model of noise we now give an algorithm for learning k-intervals.
5.1 An algorithm for clustering intervals
The algorithm is a generalization of the interval learning algorithm in the original model. The main idea is that when the teacher asks to merge two intervals (a_i, a_j) and (a_j, a_k), then we know that at least an η fraction of the portion to the left and to the right of a_j is pure. Hence, the algorithm can still make progress. As the algorithm proceeds, it is going to mark certain intervals as "pure", which means that all the points in that interval belong to the same cluster. More formally, the algorithm is as follows:
1. Start with one interval [a_start, a_end] containing all the points.
2. At each step, cluster the points using the current set of intervals and present that clustering
to the teacher.
3. On a split request: divide the interval in half.
4. On a merge request:
• If both the intervals are marked "pure", merge them.
• If both the intervals are unmarked, then create 3 intervals, where the middle interval contains an η fraction of each of the two intervals. Also mark the middle interval as "pure".
• If one interval is marked and one is unmarked, then shift the boundary between the two intervals towards the unmarked interval by a fraction of η. (A sketch of this case analysis follows the list.)
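The following Python fragment sketches this case analysis for two adjacent index intervals; the integer truncation of the η fractions and the data layout are our own simplifications, not part of the algorithm's specification:

    def noisy_merge(left, right, pure, eta):
        # left = (a, b), right = (b, c) are adjacent index intervals;
        # `pure` is the set of intervals currently marked "pure".
        (a, b), (_, c) = left, right
        if left in pure and right in pure:
            pure.discard(left); pure.discard(right)
            merged = (a, c); pure.add(merged)
            return [merged]
        if left not in pure and right not in pure:
            w1, w2 = int(eta * (b - a)), int(eta * (c - b))
            mid = (b - w1, b + w2)        # middle interval, marked "pure"
            pure.add(mid)
            return [(a, b - w1), mid, (b + w2, c)]
        if left in pure:                   # shift boundary into unmarked side
            b2 = b + int(eta * (c - b))
            return [(a, b2), (b2, c)]
        b2 = b - int(eta * (b - a))
        return [(a, b2), (b2, c)]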
Theorem 5.2. The algorithm clusters the class of intervals using at most O(k (log_{1/(1−η)} m)^2) queries.
Proof. We will call a merge request "impure" if it involves at least one impure interval, i.e., an interval which contains points from two or more clusters. Else we will call it "pure". Notice that every split and impure merge request makes progress, i.e., the size of some target interval is reduced by at least an η fraction. Hence, the total number of split + impure merge requests is ≤ k log_{1/(1−η)} m. We also know that the total number of unmarked intervals is ≤ k log_{1/(1−η)} m, since only split requests increase the unmarked intervals. Also, the total number of marked intervals is at most the total number of unmarked intervals, since every marked interval can be charged to a split request. Hence, the total number of intervals is ≤ 2k log_{1/(1−η)} m.
To bound the total number of pure merges, notice that every time a pure merge is made, the size of some interval decreases by at least an η fraction. The size of an interval can decrease at most log_{1/(1−η)} m times. Hence, the total number of pure merges is ≤ k (log_{1/(1−η)} m)^2.
Hence, the algorithm makes at most O(k (log_{1/(1−η)} m)^2) queries.
6 Properties of the Data
We now adapt the query framework of [BB08] to cluster datasets which satisfy certain natural separation conditions with respect to the target partitioning. For this section, we sometimes write d = ⟨e_1, e_2, …, e_{n(n−1)/2}⟩ to mean the set of distances that exist between all pairs of n points. This
list is always ordered by increasing distance. For a definition of the Single-Linkage and Min-Sum
clustering functions, please see the appendix.
6.1 Threshold Separation
We introduce a (strong) property that may be satisfied by d = ⟨e_1, e_2, …, e_{n(n−1)/2}⟩ with respect to Γ, the target clustering. It is important to note that this property imposes restrictions on d, defined by the data. An inner edge of Γ is a distance between two points inside a cluster, while an outer edge is a distance between two points in differing clusters.
STRICT THRESHOLD SEPARATION. There exists a threshold t > 0 such that all inner edges of Γ have distance less than or equal to t, and all outer edges have distance greater than t.
In other words, the pairwise distances between the data are such that all inner edges of d (w.r.t. Γ) have distance smaller than all outer edges (again, w.r.t. Γ). This property gives away a lot of information about Γ, in that it allows Single-Linkage to fully recover Γ, as we will see in Theorem 6.1. Before we present the algorithm to interact with the teacher, Theorem 6.1 will be useful (proof omitted).
[Kle03, JS71] introduce the following 3 properties which a clustering function can satisfy. An F(d, k)-transformation of d is a change to d such that inner-cluster distances in d are decreased, and outer-cluster distances are increased.
1. CONSISTENCY. Fix k. Let d be a distance function, and d′ be an F(d, k)-transformation of d. Then F(d, k) = F(d′, k).
2. ORDER-CONSISTENCY. For any two distance functions d and d′, and number of clusters k, if the order of edges in d is the same as the order of edges in d′, then F(d, k) = F(d′, k).
3. k-RICHNESS. For any number of clusters k, Range(F(·, k)) is equal to the set of all k-partitions of S.
Theorem 6.1. Fix k and a target k-partitioning Γ, and let d be a distance function satisfying Strict Threshold Separation w.r.t. Γ. Then for any Consistent, k-Rich, Order-Consistent partitioning function F, we have F(d, k) = Γ.
Note that since Single-Linkage is Consistent, k-Rich, and Order-Consistent [ZBD09], it immediately follows that SL(d, k) = Γ; in other words, SL is guaranteed to find the target k-partitioning, but we still have to interact with the teacher to find out k. It is a recently resolved problem that Single-Linkage is not the only function satisfying the above properties [ZBD], so the class of Consistent, k-Rich, and Order-Consistent functions has many members. We now present the algorithm to interact with the teacher.
Theorem 6.2. Given a dataset satisfying Strict Threshold Separation, there exists an algorithm which can find the target partitioning for any hypothesis class in O(log(n)) queries.
Proof. Note that the threshold t and the number of clusters k are not known to the algorithm, else
the target could be found immediately. By theorem 6.1, we know that the target must be exactly
what Single-Linkage returns for some k, and it remains to find the number of clusters. This can be
done using a binary search on the number of clusters which can vary from 1 to n. We start with
some candidate k, and if the teacher tells us to split anything, we know the number of clusters must
be larger, and if we are told to merge, we know the number of clusters must be smaller. Thus we can
find the correct number of clusters in O(log(n)) queries.
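As an illustration, here is a minimal Python sketch of this binary search using SciPy's single-linkage routine; the teacher is modeled as a callback, which is our own interface choice rather than anything specified in the text:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def find_target_k(X, teacher):
        # teacher(labels) returns 'split', 'merge', or 'ok'.
        Z = linkage(X, method='single')
        lo, hi = 1, len(X)
        while lo <= hi:
            k = (lo + hi) // 2
            labels = fcluster(Z, t=k, criterion='maxclust')
            answer = teacher(labels)
            if answer == 'ok':
                return k, labels
            if answer == 'split':        # more clusters are needed
                lo = k + 1
            else:                        # 'merge': fewer clusters
                hi = k - 1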
Note that since strict threshold separation implies strict separation, the O(k) algorithm presented below can also be used, giving O(min(log(n), k)) queries.
Strict Separation: Now we relax strict threshold separation.
STRICT SEPARATION. All points in the same cluster are more similar to one another than to points outside the cluster.
With this property, it is no longer true that all inner distances are smaller than outer distances, and
therefore Theorem 6.1 does not apply. However, [BBV08] prove the following lemma
Lemma 6.3. [BBV08] For a dataset satisfying strict separation, let SL(d) be the tree returned by
Single-Linkage. Then any partitioning respecting the strict separation of d will be a pruning of
SL(d).
Theorem 6.4. Given a dataset satisfying Strict Separation, there exists an algorithm which can find
the target partitioning for any hypothesis class in O(k) queries
Proof. Let the distances between points be represented by the distance function d. By lemma 6.3 we
know that the target partitioning must be a pruning of SL(d). Our algorithm will start by presenting
the teacher with all points in a single cluster. Upon a split request, we split according to the relevant
node in SL(d). There can be no merge requests since we always split perfectly. Each split will create
a new cluster, so there will be at most k − 1 of these splits, after which the correct partitioning is
found.
γ-margin Separation: Margins show up in many learning models, and this is no exception. A natural assumption is that there may be a separation of at least γ between points in differing clusters, where the points all lie inside the unit ball.
γ-MARGIN SEPARATION. Points in different clusters of the target partitioning are at least γ away from one another.
With this property, we can prove the following for all hypothesis classes.
Theorem 6.5. Given a dataset satisfying γ-margin Separation, there exists an algorithm which can find the target partitioning for any hypothesis class in O((√d/γ)^d · k) queries.
Proof. We split the unit ball (inside which all points live) into hypercubes with edge length γ/√d. We are interested in the diameter of such a hypercube. The diameter of a d-dimensional hypercube with side γ/√d is (γ/√d) · √d = γ, so no two points inside a hypercube of side γ/√d can be more than γ apart. It follows that if we split the unit ball up using a grid of such hypercubes, all points inside a hypercube must be from the same cluster. We say such a hypercube is "pure".
There are at most O((√d/γ)^d) hypercubes in a unit ball. We show each hypercube as a single cluster to the teacher. Since all hypercubes are pure, we can only get merge requests, of which there can be at most O((√d/γ)^d · k) until the target partitioning is found.
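A small numpy sketch of the gridding step; the cell-indexing scheme below is one straightforward way to realize it and is our own choice:

    import numpy as np
    from collections import defaultdict

    def hypercube_clusters(X, gamma):
        # Assign each point (rows of X, inside the unit ball) to the grid
        # cell of side gamma/sqrt(d) containing it; by the diameter argument
        # every occupied cell is pure and is shown as one cluster.
        d = X.shape[1]
        side = gamma / np.sqrt(d)
        cells = defaultdict(list)
        for i, x in enumerate(X):
            cells[tuple(np.floor(x / side).astype(int))].append(i)
        return list(cells.values())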
7 Conclusions and open problems
In this paper we investigated a recently proposed model of clustering under feedback. We gave algorithms for clustering geometric concepts in the model. For datasets satisfying a spectrum of weak to
strong properties, we gave query bounds, and showed that a class of clustering functions containing
Single-Linkage will find the target clustering under the strongest property. We also studied natural
generalizations of the model and gave efficient algorithms for learning intervals in the new models.
Several interesting problems remain:
1. Give algorithms for clustering other classes of functions, for example linear separators in
the original model.
2. Give efficient algorithms for clustering geometric concept classes in the new models.
3. Establish connections between the proposed models and the Equivalence Query model of
learning.
4. In [BB08], the authors give an algorithm for learning the class of disjunctions. It would be
interesting to come up with an attribute efficient version of the algorithm, similar in spirit
to the Winnow algorithm [Lit87].
References
[ABD09] M. Ackerman and S. Ben-David. Clusterability: A theoretical study. Proceedings of AISTATS-09, JMLR: W&CP, 5:1–8, 2009.
[AL10] M. Ackerman, S. Ben-David, and D. Loker. Characterization of linkage-based clustering. COLT 2010, 2010.
[Ang98] D. Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1998.
[BB08] Maria-Florina Balcan and Avrim Blum. Clustering with interactive feedback. In ALT, 2008.
[BBV08] M.-F. Balcan, A. Blum, and S. Vempala. A discriminative framework for clustering via similarity functions. In Proceedings of the 40th ACM Symposium on Theory of Computing, 2008.
[Blu09] Avrim Blum. Thoughts on clustering. In NIPS Workshop on Clustering Theory, 2009.
[CGTS99] M. Charikar, S. Guha, E. Tardos, and D. B. Shmoys. A constant-factor approximation algorithm for the k-median problem. In ACM Symposium on Theory of Computing, 1999.
[Das99] S. Dasgupta. Learning mixtures of Gaussians. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science, 1999.
[GvLW09] I. Guyon, U. von Luxburg, and R. C. Williamson. Clustering: Science or art? In NIPS Workshop on Clustering Theory, 2009.
[JS71] N. Jardine and R. Sibson. Mathematical Taxonomy. New York, 1971.
[Kle03] J. Kleinberg. An impossibility theorem for clustering. In Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference, page 463. The MIT Press, 2003.
[KVV00] R. Kannan, S. Vempala, and A. Vetta. On clusterings: good, bad and spectral. In FOCS '00: Proceedings of the 41st Annual Symposium on Foundations of Computer Science, 2000.
[Lit87] Nick Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4), 1987.
[Vap98] V. N. Vapnik. Statistical Learning Theory. John Wiley and Sons Inc., 1998.
[ZBD] Reza Bosagh Zadeh and Shai Ben-David. Axiomatic characterizations of single-linkage. In submission.
[ZBD09] Reza Bosagh Zadeh and Shai Ben-David. A uniqueness theorem for clustering. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, 2009.
3,441 | 4,116 | Online Learning: Random Averages, Combinatorial
Parameters, and Learnability
Alexander Rakhlin
Department of Statistics
University of Pennsylvania
Karthik Sridharan
Toyota Technological Institute
at Chicago
Ambuj Tewari
Computer Science Department
University of Texas at Austin
Abstract
We develop a theory of online learning by defining several complexity measures.
Among them are analogues of Rademacher complexity, covering numbers and fat-shattering dimension from statistical learning theory. Relationships among these
complexity measures, their connection to online learning, and tools for bounding
them are provided. We apply these results to various learning problems. We
provide a complete characterization of online learnability in the supervised setting.
1 Introduction
In the online learning framework, the learner is faced with a sequence of data appearing at discrete
time intervals. In contrast to the classical "batch" learning scenario where the learner is being
evaluated after the sequence is completely revealed, in the online framework the learner is evaluated
at every round. Furthermore, in the batch scenario the data source is typically assumed to be i.i.d.
with an unknown distribution, while in the online framework we relax or eliminate any stochastic
assumptions on the data source. As such, the online learning problem can be phrased as a repeated
two-player game between the learner (player) and the adversary (Nature).
Let F be a class of functions and X some set. The Online Learning Model is defined as
the following T -round interaction between the learner and the adversary: On round t = 1, . . . , T ,
the Learner chooses f_t ∈ F, the Adversary picks x_t ∈ X, and the Learner suffers loss f_t(x_t). At
the end of T rounds we define regret as the difference between the cumulative loss of the player
as compared to the cumulative loss of the best fixed comparator. For the given pair (F, X ), the
problem is said to be online learnable if there exists an algorithm for the learner such that regret
grows sublinearly. Learnability is closely related to Hannan consistency [13, 9].
There has been a lot of interest in a particular setting of the online learning model, called online
convex optimization. In this setting, we write x_t(f_t) as the loss incurred by the learner, and the assumption is made that the function x_t is convex in its argument. The particular convexity structure enables the development of optimization-based algorithms for the learner's choices. Learnability and precise rates of growth of regret have been shown in a number of recent papers (e.g. [33, 25, 1]). The online learning model also subsumes the prediction setting. In the latter, the learner's choice of a Y-valued function g_t leads to the loss ℓ(g_t(z_t), y_t) according to a fixed loss function ℓ : Y × Y → R. The choice of the learner is equivalently written as f_t(x) = ℓ(g_t(z), y), and x_t = (z_t, y_t) is the choice of the adversary. In Section 6 we discuss the prediction setting in more detail.
In the "batch" learning scenario, data {(x_i, y_i)}_{i=1}^T is presented as an i.i.d. draw from a fixed distribution over some product X × Y. Learnability results have been extensively studied in the PAC framework [29] and its agnostic extensions [14, 17]. It is well-known that learnability in the binary case (that is, Y = {−1, +1}) is completely characterized by finiteness of the Vapnik-Chervonenkis combinatorial dimension of the function class [32, 31]. In the real-valued case, a number of combinatorial quantities have been proposed: P-dimension [23], V-dimension, as well as the scale-sensitive versions P_γ-dimension [17, 5] and V_γ-dimension [3]. The last two dimensions
were shown to be characterizing learnability [3] and uniform convergence of means to expectations
for function classes.
In contrast to the classical learning setting, there has been surprisingly little work on characterizing
learnability for the online learning framework. Littlestone [19] has shown that, in the setting of
prediction of binary outcomes, a certain combinatorial property of the binary-valued function class
characterizes learnability in the realizable case. The result has been extended to the non-realizable
case by Shai Ben-David, Dávid Pál and Shai Shalev-Shwartz [7], who named this combinatorial quantity the Littlestone dimension. In parallel to [7], minimax analysis of online convex optimization yielded new insights into the value of the game, its minimax dual representation, as well as
algorithm-independent upper and lower bounds [1, 27]. In this paper, we build upon these results
and the findings of [7] to develop a theory of online learning.
We show that in the online learning model, a notion which we call Sequential Rademacher complexity allows us to easily prove learnability for a vast array of problems. The role of this complexity
is similar to the role of the Rademacher complexity in statistical learning theory. Next, we extend Littlestone's dimension to the real-valued case. We show that finiteness of this scale-sensitive
version, which we call the fat-shattering dimension, is necessary and sufficient for learnability in
the prediction setting. Extending the binary-valued result of [7], we introduce a generic algorithm
which plays the role similar to that of empirical risk minimization for i.i.d. data: if the problem
is learnable in the supervised setting, then it is learnable by this algorithm. Along the way we develop analogues of Massart's finite class lemma, the Dudley integral upper bound on the Sequential
Rademacher complexity, appropriately defined packing and covering numbers, and even an analogue
of the Sauer-Shelah combinatorial lemma. In the full version of this paper, we introduce a generalization of the uniform law of large numbers for non-i.i.d. distributions and show that finiteness of
the fat-shattering dimension implies this convergence.
Many of the results come with more work than their counterparts in statistical learning theory. In
particular, instead of training sets we have to work with trees, making the results somewhat involved.
For this reason, we state our results without proofs, deferring the details to the full version of this
paper. While the spirit of the online theory is that it provides a "temporal" generalization of the "batch" learning problem, not all the results from statistical learning theory transfer to our setting. For instance, two distinct notions of a packing set exist for trees, and these notions can be seen to coincide in "batch" learning. The fact that many notions of statistical learning theory can be
extended to the online learning model is indeed remarkable.
2 Preliminaries
By phrasing the online learning model as a repeated game and considering its minimax value, we
naturally arrive at an important object in combinatorial game theory: trees. Unless specified, all
trees considered in this paper are rooted binary trees with equal-depth paths from the root to the
leaves. While it is useful to have the tree picture in mind when reading the paper, it is also necessary
to precisely define trees as mathematical objects. We opt for the following definition. Given some
set Z, a Z-valued tree of depth T is a sequence (z_1, …, z_T) of T mappings z_i : {±1}^{i−1} → Z. The root of the tree z is the constant function z_1 ∈ Z. Armed with this definition, we can talk about various operations on trees. For a function f : Z → U, f(x) denotes the U-valued tree defined by the mappings (f ∘ x_1, …, f ∘ x_T). A path of length T is a sequence ε = (ε_1, …, ε_{T−1}) ∈ {±1}^{T−1}. We shall abuse notation by referring to x_i(ε_1, …, ε_{i−1}) by x_i(ε). Clearly x_i only depends on the first i − 1 elements of ε.
We denote (y_a, …, y_b) by y_{a:b}. The set of all functions from X to Y is denoted by Y^X, and the t-fold product X × … × X is denoted by X^t. For any T ∈ N, [T] denotes the set {1, …, T}.
Whenever the variable in sup (inf) is not quantified, it ranges over the set of all possible values.
3 Value of the Game
Fix the sets F and X and consider the online learning model stated in the introduction. We assume
that F is a separable metric space. Let Q be the set of Borel probability measures on F. Assume that
Q is weakly compact. We consider randomized learners who predict a distribution q_t ∈ Q on every round. Formally, define a learner's strategy π as a sequence of mappings π_t : X^{t−1} × F^{t−1} → Q for each t ∈ [T]. We define the value of the game as

  V_T(F, X) = inf_{q_1∈Q} sup_{x_1∈X} E_{f_1∼q_1} · · · inf_{q_T∈Q} sup_{x_T∈X} E_{f_T∼q_T} [ Σ_{t=1}^T f_t(x_t) − inf_{f∈F} Σ_{t=1}^T f(x_t) ]    (1)

where f_t has distribution q_t. We consider here the adaptive adversary who gets to choose each x_t based on the history of moves f_{1:t−1} and x_{1:t−1}.
Note that our assumption that F is a separable metric space implies that Q is tight [28], and Prokhorov's theorem states that compactness of Q under the weak topology is equivalent to tightness [28]. Hence we have that Q is compact under the weak topology, and this is essentially what we need to apply a modification of Theorem 1 of [1]. Specifically, we show the following:
Theorem 1. Let F and X be the sets of moves for the two players, satisfying the necessary conditions for the minimax theorem to hold. Denote by Q and P the sets of probability distributions (mixed strategies) on F and X, respectively. Then

  V_T(F, X) = sup_{p_1} E_{x_1∼p_1} · · · sup_{p_T} E_{x_T∼p_T} [ Σ_{t=1}^T inf_{f_t∈F} E_{x_t∼p_t}[f_t(x_t)] − inf_{f∈F} Σ_{t=1}^T f(x_t) ].    (2)
The question of learnability in the online learning model is now reduced to the study of V_T(F, X), taking Eq. (2) as the starting point. In particular, under our definition, showing that the value grows sublinearly with T is equivalent to showing learnability.
Definition 1. A class F is said to be online learnable with respect to the given X if

  lim sup_{T→∞} V_T(F, X) / T = 0.
The rest of the paper is aimed at understanding the value of the game V_T(F, X) for various function classes F. Since complexity of F is the focus of the paper, we shall often write V_T(F), and the dependence on X will be implicit. One of the key notions introduced in this paper is the complexity
which we term Sequential Rademacher complexity. A natural generalization of Rademacher complexity [18, 6, 21], the sequential analogue possesses many of the nice properties of its classical
cousin. The properties are proved in Section 7 and then used to show learnability for many of the
examples in Section 8. The first step, however, is to show that Sequential Rademacher complexity
upper bounds the value of the game. This is the subject of the next section.
4 Random Averages
Definition 2. The Sequential Rademacher Complexity of a function class F ⊆ R^X is defined as

  R_T(F) = sup_x E_ε [ sup_{f∈F} Σ_{t=1}^T ε_t f(x_t(ε)) ]

where the outer supremum is taken over all X-valued trees of depth T and ε = (ε_1, …, ε_T) is a sequence of i.i.d. Rademacher random variables.
Theorem 2. The minimax value of a randomized game is bounded as V_T(F) ≤ 2 R_T(F).
Theorem 2 relies on a technical lemma, whose proof requires considerably more work than the
classical symmetrization proof [11, 21] due to the non-i.i.d. nature of the sequences. We mention
that under strong assumptions on the space of functions, the Sequential Rademacher and the classical
Rademacher complexities coincide (see [1]). In general, however, the two complexities are very
different. For example, the discrepancy is exhibited by a class of linear threshold functions.
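To make Definition 2 concrete, the following Python sketch Monte-Carlo-estimates the inner expectation for a finite class F on one fixed tree (the definition then takes a further supremum over trees); the tree encoding is our own choice for illustration:

    import random

    def seq_rademacher_on_tree(F, tree, T, n_samples=10000):
        # tree(t, eps_prefix) returns x_t(eps_1, ..., eps_{t-1});
        # F is a finite list of functions f: X -> R.
        total = 0.0
        for _ in range(n_samples):
            eps = [random.choice((-1, 1)) for _ in range(T)]
            xs = [tree(t, tuple(eps[:t - 1])) for t in range(1, T + 1)]
            total += max(sum(e * f(x) for e, x in zip(eps, xs)) for f in F)
        return total / n_samples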
5 Covering Numbers and Combinatorial Parameters
In online learning, the notion characterizing learnability for binary prediction in the realizable case
has been introduced by Littlestone [19] and extended to the non-realizable case of binary prediction by Shai Ben-David, Dávid Pál and Shai Shalev-Shwartz [7]. Next, we define the Littlestone
dimension [19, 7] and propose its scale-sensitive versions for real-valued function classes. In the
sequel, these combinatorial parameters are shown to control the growth of covering numbers on
trees. In the setting of prediction, the combinatorial parameters are shown to exactly characterize
learnability (see Section 6).
Definition 3 ([19, 7]). An X-valued tree x of depth d is shattered by a function class F ⊆ {±1}^X if for all ε ∈ {±1}^d, there exists f ∈ F such that f(x_t(ε)) = ε_t for all t ∈ [d]. The Littlestone dimension Ldim(F, X) is the largest d such that F shatters an X-valued tree of depth d.
Definition 4. An X-valued tree x of depth d is α-shattered by a function class F ⊆ R^X if there exists an R-valued tree s of depth d such that

  ∀ε ∈ {±1}^d, ∃f ∈ F s.t. ∀t ∈ [d], ε_t (f(x_t(ε)) − s_t(ε)) ≥ α/2.

The tree s is called the witness to shattering. The fat-shattering dimension fat_α(F, X) at scale α is the largest d such that F α-shatters an X-valued tree of depth d.
With these definitions it is easy to see that fat_α(F, X) = Ldim(F, X) for a binary-valued function class F ⊆ {0, 1}^X for any 0 < α ≤ 1. When X and/or F is understood from the context, we will simply write fat_α or fat_α(F) instead of fat_α(F, X).
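For small finite classes and trees, Definition 4 can be checked by brute force; the following sketch enumerates all sign paths (the tree and witness encodings mirror the one used above and are our own assumptions):

    from itertools import product

    def is_alpha_shattered(F, x, s, d, alpha):
        # x(t, eps) and s(t, eps) give x_t and s_t on prefix eps.
        for eps in product((-1, 1), repeat=d):
            witnessed = any(
                all(eps[t] * (f(x(t + 1, eps[:t])) - s(t + 1, eps[:t])) >= alpha / 2
                    for t in range(d))
                for f in F)
            if not witnessed:
                return False
        return True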
Let us mention that if trees x are defined by constant mappings x_t(ε) = x_t, the combinatorial parameters coincide with the Vapnik-Chervonenkis dimension and with the scale-sensitive dimension P_γ. Therefore, the notions we are studying are strict "temporal" generalizations of the VC theory.
As in statistical learning theory, the combinatorial parameters are only useful if they can be shown to capture that aspect of F which is important for learnability. In particular, a "size" of a function class is known to be related to the complexity of learning from i.i.d. data, and the classical way to measure "size" is through a cover or a packing set. We propose the following definitions for online learning.
Definition 5. A set V of R-valued trees of depth T is an α-cover (with respect to the ℓ_p-norm) of F ⊆ R^X on a tree x of depth T if

  ∀f ∈ F, ∀ε ∈ {±1}^T, ∃v ∈ V s.t. (1/T) Σ_{t=1}^T |v_t(ε) − f(x_t(ε))|^p ≤ α^p.

The covering number N_p(α, F, x) of a function class F on a given tree x is the size of the smallest cover. Further define N_p(α, F, T) = sup_x N_p(α, F, x), the maximal ℓ_p covering number of F over depth-T trees.
In particular, a set V of R-valued trees of depth T is a 0-cover of F ⊆ R^X on a tree x of depth T if for all f ∈ F and ε ∈ {±1}^T, there exists v ∈ V s.t. v_t(ε) = f(x_t(ε)). We denote by N(0, F, x) the size of a smallest 0-cover on x and N(0, F, T) = sup_x N(0, F, x). The 0-cover should not be mistaken for the size |{f(x) : f ∈ F}| of the projection of F onto the tree x, and the same care should be taken when dealing with α-covers.
We would like to comment that while in the i.i.d. setting there is a notion of packing number that
upper and lower bounds covering number, in the sequential counterpart such an analog fails.
5.1 A Combinatorial Upper Bound
We now relate the combinatorial parameters introduced in the previous section to the size of a cover.
In the binary case (k = 1 below), a reader might notice a similarity of Theorem 3 to the classical
results due to Sauer [24], Shelah [26] (also, Perles and Shelah), and Vapnik and Chervonenkis [32].
There are several approaches to proving what is often called the Sauer-Shelah lemma. We opt for
the inductive-style proof (e.g. Alon and Spencer [4]). Dealing with trees, however, requires more
work than in the VC case.
Theorem 3. Let F ⊆ {0, …, k}^X be a class of functions with fat_1(F) = d_1 and fat_2(F) = d_2. Then

  N_∞(1/2, F, T) ≤ Σ_{i=0}^{d_2} (T choose i) k^i ≤ (ekT)^{d_2},    N(0, F, T) ≤ Σ_{i=0}^{d_1} (T choose i) k^i ≤ (ekT)^{d_1}.
Of particular interest is the case k = 1, when fat_1(F) = Ldim(F). Armed with Theorem 3, we can reduce the problem of bounding the size of a cover at scale α by a discretization trick. For the classical case of a cover based on a set of points, the discretization idea appears in [3, 22]. We now show that the covering numbers are bounded in terms of the fat-shattering dimension.
Corollary 4. Suppose F is a class of [−1, 1]-valued functions on X. Then for any α > 0, any T > 0, and any X-valued tree x of depth T,

  N_1(α, F, x) ≤ N_2(α, F, x) ≤ N_∞(α, F, x) ≤ (2eT/α)^{fat_α(F)}.
When bounding deviations of means from expectations uniformly over the function class, the usual
approach proceeds by a symmetrization argument [12] followed by passing to a cover of the function
class and a union bound (e.g. [21]). Alternatively, a more refined chaining analysis integrates over
covering at different scales (e.g. [30]). By following the same path, we are able to prove a number
of similar results for our setting. Next, we present a bound similar to Massart?s finite class lemma
[20, Lemma 5.2]. This result will be used when integrating over different scales for the cover.
5.2 Finite Class Lemma and the Chaining Method
Lemma 5. For any finite set V of R-valued trees of depth T we have that

  E[ max_{v∈V} Σ_{t=1}^T ε_t v_t(ε) ] ≤ √( 2 log(|V|) · max_{v∈V} max_{ε∈{±1}^T} Σ_{t=1}^T v_t(ε)^2 ).

A simple consequence of the above lemma is that if F ⊆ [0, 1]^X is a finite class, then for any given tree x we obtain a √(2T log(|F|)) upper bound. If f ∈ F is associated with an "expert" (see [9]), this result combined with Theorem 2 yields a bound given by the expert's algorithm. In Section 8
we discuss this case in more detail. However, as we show next, Lemma 5 goes well beyond just
finite classes and can be used to get an analog of Dudley entropy bound [10] for the online setting
through a chaining argument.
Definition 6. The Integrated complexity of a function class F ⊆ [−1, 1]^X is defined as

  D_T(F) = inf_α { 4Tα + 12 ∫_α^1 √( T log N_2(δ, F, T) ) dδ }.
The basic idea in the proof of the following theorem is the same as in statistical learning: R_T(F) is bounded by controlling the complexity along the chain of coverings. The argument for trees, though, is more involved than in the classical case.
Theorem 6. For any function class F ⊆ [−1, 1]^X, R_T(F) ≤ D_T(F).
6 Supervised Learning
In this section we study the supervised learning problem where the player picks a function f_t ∈ R^X at any time t, the adversary provides an input-target pair (x_t, y_t), and the player suffers loss |f_t(x_t) − y_t|. Note that if F ⊆ {±1}^X and each y_t ∈ {±1}, then the problem boils down to the binary classification problem. As we are interested in prediction, we allow f_t to be outside of F. Though we use the absolute loss in this section, it is easy to see that all the results hold (with modified rates) for any loss ℓ(f(x), y) such that for all f, x and y, φ(ℓ(ŷ, y)) ≤ |ŷ − y| ≤ ψ(ℓ(ŷ, y)), where φ and ψ are monotonically increasing functions. For instance, the squared loss is a classic example.
To formally define the value of the online supervised learning game, fix a set of labels Y ⊆ [−1, 1]. Given F, define the associated loss class F_S = {(x, y) ↦ |f(x) − y| : f ∈ F}. Now, the supervised game is obtained using the pair (F_S, X × Y) and we accordingly define V_T^S(F) = V_T(F_S, X × Y). Binary classification is, of course, a special case when Y = {±1} and F ⊆ {±1}^X. In that case, we simply use V_T^Binary for V_T^S.
Proposition 7. For the supervised learning game played with a function class F ⊆ [−1, 1]^X, for any T ≥ 1,

  (1/(4√2)) sup_α { α √( T min{fat_α, T} ) } ≤ V_T^S(F) ≤ R_T(F) ≤ D_T(F) ≤ inf_α { 4Tα + 12 √T ∫_α^1 √( fat_δ log(2eT/δ) ) dδ }.    (3)
Theorem 8. For any function class F ⊆ [−1, 1]^X, F is online learnable in the supervised setting if and only if fat_α(F) is finite for every α > 0. Moreover, if the function class is online learnable, then the value of the supervised game V_T^S(F), the Sequential Rademacher complexity R_T(F), and the Integrated complexity D_T(F) are within a multiplicative factor of O(log^{3/2} T) of each other.
Corollary 9. For the binary classification game played with function class F we have that

  K_1 √( T min{Ldim(F), T} ) ≤ V_T^Binary(F) ≤ K_2 √( T Ldim(F) log T )

for some universal constants K_1, K_2. This recovers the result of [7].
We wish to point out that the lower bound of Proposition 7 also holds for "improper" supervised learning algorithms, i.e. those that simply output a prediction ŷ_t ∈ Y rather than a function f_t ∈ F. Since a proper learning strategy can always be used as an improper learning strategy, we trivially have that if a class is online learnable in the supervised setting then it is improperly online learnable. Because of the above-mentioned property of the lower bound of Proposition 7, we also have the non-trivial reverse implication: if a class is improperly online learnable in the supervised setting, it is online learnable.
6.1 Generic Algorithm
We shall now present a generic improper learning algorithm for the supervised setting that achieves a low regret bound whenever the function class is online learnable. For any α > 0 define an α-discretization of the [−1, 1] interval as B_α = {−1 + α/2, −1 + 3α/2, …, −1 + (2k + 1)α/2, …} for 0 ≤ k and (2k + 1)α ≤ 4. Also, for any a ∈ [−1, 1] define ⌊a⌋_α = argmin_{r∈B_α} |r − a|. For a set of functions V ⊆ F, any r ∈ B_α and x ∈ X define V(r, x) = {f ∈ V | f(x) ∈ (r − α/2, r + α/2]}. The algorithm proceeds by generating "experts" in a way similar to [7]. Using these experts along with the exponentially weighted experts algorithm, we shall provide the generic algorithm for online supervised learning.
Algorithm 1 Expert(F, α, 1 ≤ i_1 < … < i_L ≤ T, Y_1, …, Y_L)
  V_1 ← F
  for t = 1 to T do
    R_t(x) = {r ∈ B_α : fat_α(V_t(r, x)) = max_{r′∈B_α} fat_α(V_t(r′, x))}
    For each x ∈ X, let f_t′(x) = (1/|R_t(x)|) Σ_{r∈R_t(x)} r
    if t ∈ {i_1, …, i_L} then
      ∀x ∈ X, f_t(x) = Y_j where j is s.t. t = i_j
      Play f_t, receive x_t, and update V_{t+1} = V_t(f_t(x_t), x_t)
    else
      Play f_t = f_t′, receive x_t, and set V_{t+1} = V_t
    end if
  end for
For each L ≤ fat_α(F) and every possible choice of 1 ≤ i_1 < … < i_L ≤ T and Y_1, …, Y_L ∈ B_α we generate an expert. Denote this set of experts as E_T. Each expert outputs a function f_t ∈ F at every round t. Hence each expert e ∈ E_T can be seen as a sequence (e_1, …, e_T) of mappings e_t : X^{t−1} → F. The number of unique experts is

  |E_T| = Σ_{L=0}^{fat_α} (T choose L) (|B_α| − 1)^L ≤ (2T/α)^{fat_α}.

Using an argument similar to [7], for any f ∈ F there exists e ∈ E_T such that for any t ∈ [T], |f(x_t) − e(x_{1:t−1})(x_t)| ≤ α.
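For reference, a minimal Python sketch of the exponentially weighted (Hedge-style) forecaster that Theorem 10 below runs over the constructed expert set; the per-round loss-dictionary interface is our own simplification:

    import math

    def exponentially_weighted(experts, loss_rounds, eta):
        # experts: list of expert ids; loss_rounds: iterable of dicts
        # mapping expert id -> loss in [0, 1] for that round.
        logw = {e: 0.0 for e in experts}
        total = 0.0
        for losses in loss_rounds:
            m = max(logw.values())                     # stabilize exponents
            w = {e: math.exp(v - m) for e, v in logw.items()}
            z = sum(w.values())
            total += sum(w[e] / z * losses[e] for e in experts)
            for e in experts:
                logw[e] -= eta * losses[e]
        return total   # expected cumulative loss of the forecaster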
Theorem 10. For any α > 0, if we run the exponentially weighted experts algorithm with the set E_T of experts, then the expected regret of the algorithm is bounded as

  E[ Σ_{t=1}^T f_t(x_t) − inf_{f∈F} Σ_{t=1}^T f(x_t) ] ≤ αT + √( T fat_α log(2T/α) ).

Further, if F is bounded by 1, then by running an additional experts algorithm over the experts for discretizations over α, we can provide a regret guarantee of

  E[ Σ_{t=1}^T f_t(x_t) − inf_{f∈F} Σ_{t=1}^T f(x_t) ] ≤ inf_α { αT + √( T fat_α log(2T/α) ) + √( T (3 + 2 log log(1/α)) ) }.

7 Structural Results
Being able to bound complexity of a function class by a complexity of a simpler class is of great
utility for proving bounds. In statistical learning theory, such structural results are obtained through
properties of Rademacher averages [21, 6]. In particular, the contraction inequality due to Ledoux
and Talagrand, allows one to pass from a composition of a Lipschitz function with a class to the
function class itself. This wonderful property permits easy convergence proofs for a vast array of
problems. We show that the notion of Sequential Rademacher complexity also enjoys many of the
same properties. In Section 8, the effectiveness of the results is illustrated on a number of examples.
First, we prove the contraction inequality.
Lemma 11. Fix a class F ⊆ R^Z and a function φ : R × Z → R. Assume, for all z ∈ Z, φ(·, z) is an L-Lipschitz function. Then R(φ(F)) ≤ L · R(F) where φ(F) = {z ↦ φ(f(z), z) : f ∈ F}.
The next lemma bounds the Sequential Rademacher complexity for the product of classes.
Lemma 12. Let F = F_1 × … × F_k where each F_j ⊆ R^X. Also let φ : R^k → R be L-Lipschitz w.r.t. the ‖·‖_∞ norm. Then we have that R(φ ∘ F) ≤ L · O( log^{3/2}(T) ) · Σ_{j=1}^k R(F_j).
Corollary 13. For a fixed binary function b : {±1}^k → {±1} and classes F_1, …, F_k of {±1}-valued functions, R(b(F_1, …, F_k)) ≤ O( log^{3/2}(T) ) · Σ_{j=1}^k R(F_j).
In the next proposition, we summarize some basic properties of Sequential Rademacher complexity (see [21, 6] for the results in the i.i.d. setting):
Proposition 14. Sequential Rademacher complexity satisfies the following properties: (i) if F ⊆ G, then R(F) ≤ R(G); (ii) R(F) = R(conv(F)); (iii) R(cF) = |c| R(F) for all c ∈ R; (iv) if φ : R → R is L-Lipschitz, then R(φ(F)) ≤ L R(F); (v) for any h, R(F + h) = R(F) where F + h = {f + h : f ∈ F}.
8 Examples and Applications
Example: Linear Function Classes. Suppose F_W is a class consisting of linear functions x ↦ ⟨w, x⟩ where the weight vector w comes from some set W, F_W = {x ↦ ⟨w, x⟩ : w ∈ W}. Often, it is possible to find a strongly convex function Φ(w) ≥ 0 such that Φ(w) ≤ Φ_max < ∞ for all w ∈ W (for example the function ‖w‖_2^2 on any bounded subset of R^d).
Theorem 15. Let W be a class of weight vectors such that 0 ≤ Φ(w) ≤ Φ_max for all w ∈ W. Suppose that Φ is σ-strongly convex w.r.t. a given norm ‖·‖. Then we have R_T(F_W) ≤ ‖X‖_* √( 2 Φ_max T / σ ), where ‖X‖_* = sup_{x∈X} ‖x‖_*, the maximum dual norm of any vector in the input space.
The above result actually allows us to recover the O(√T) regret bounds of online mirror descent (including Zinkevich's online gradient descent) obtained in the online convex optimization literature. There, the set X is a set of convex Lipschitz functions on a convex set F. We interpret f(x) as x(f). It is easy to bound the value of the convex game by that of the linear game [2], i.e. one in which X is the set of linear functions. Then we directly appeal to the above theorem to bound the value of the linear game. The online convex optimization setting includes supervised learning using convex losses and linear predictors, and so our theorem also proves existence of O(√T) regret algorithms in that setting.
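For concreteness, here is a short numpy sketch of projected online gradient descent over a Euclidean ball, the special case just mentioned; the step size and interface are illustrative choices, not prescriptions from the text:

    import numpy as np

    def ogd_ball(grad_fns, dim, radius, eta):
        # grad_fns: iterable of callables, each returning the gradient of
        # the round's convex loss at the current iterate.
        w = np.zeros(dim)
        for grad in grad_fns:
            w = w - eta * grad(w)
            n = np.linalg.norm(w)
            if n > radius:                 # project back onto the ball
                w *= radius / n
            yield w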
Example: Margin Based Regret. We prove a general margin based mistake bound for binary classification. This shows the generality of our framework, since we do not require assumptions like convexity to bound the minimax regret. The proof of the following result uses a non-convex Lipschitz "ramp" function along with Lemma 11. As far as we know, this is the first general margin based mistake bound in the online setting for a general function class.
Theorem 16. For any function class F ⊆ R^X bounded by B, there exists a randomized player strategy π such that for any sequence (x_1, y_1), …, (x_T, y_T) ∈ (X × {±1})^T,

  Σ_{t=1}^T E_{f_t∼π_t(x_{1:t−1})} [ 1{f_t(x_t) y_t < 0} ] ≤ inf_{γ>0} { inf_{f∈F} Σ_{t=1}^T 1{f(x_t) y_t < γ} + (4/γ) R_T(F) + √( T log log(B/γ) ) }.
Example: Neural Networks and Decision Trees. We now consider a k-layer 1-norm neural network. To this end, let the function class F_1 be given by

  F_1 = { x ↦ Σ_j w_j^1 x_j : ‖w^1‖_1 ≤ B_1 },  and  F_i = { x ↦ Σ_j w_j^i σ(f_j(x)) : ∀j, f_j ∈ F_{i−1}, ‖w^i‖_1 ≤ B_i }

for 2 ≤ i ≤ k. The theory we have developed provides us with enough tools to control the sequential Rademacher complexity of classes like the above that are built using simpler components. The following result shows that neural networks can be learned online. A similar result, but for statistical learning, appeared in [6]. Let X ⊆ R^d, and let X_∞ be such that for all x ∈ X, ‖x‖_∞ ≤ X_∞.
Theorem 17. Let σ : R → [−1, 1] be L-Lipschitz. Then

  R_T(F_k) ≤ L^{k−1} X_∞ ( ∏_{i=1}^k B_i ) √( 2T log d ).
We can also prove online learnability of decision trees under appropriate restrictions on their depth
and number of leaves. We skip the formal statement in the interest of space but the proof proceeds
in a fashion similar to the decision tree result in [6]. The structural results enjoyed by the sequential
Rademacher complexity (esp. Corollary 13) are key to making the proof work.
Example: Transductive Learning and Prediction of Individual Sequences. Let F ⊆ R^X and let N̂_∞(α, F) be the classical pointwise (over X) covering number at scale α. It is easy to verify that N_∞(α, F, T) ≤ N̂_∞(α, F) for all T. This simple observation can be applied in several situations. First, consider transductive learning, where the set X = {z_i}_{i=1}^n is a finite set. To ensure online learnability, it is sufficient to consider an assumption on the dependence of N̂_∞(α, F) on α. An obvious example of such a class is a VC-type class with N̂_∞(α, F) ≤ (c/α)^d for some c which can depend on n. Assuming that F ⊆ [0, 1]^X, the value of the game is upper bounded by 2D_T(F) ≤ 4√(dT log c). In particular, for binary prediction, using the Sauer-Shelah lemma ensures that the value of the game is at most 4√(dT log(eT)), matching the result of [15] up to a constant 2.
In the context of prediction of individual sequences, Cesa-Bianchi and Lugosi [8] proved upper bounds in terms of the (classical) Rademacher complexity and the (classical) Dudley integral. The particular assumption made in [8] is that experts are static. Formally, we define static experts as mappings f : {1, …, T} → [0, 1], and let F denote a class of such experts. Defining X = {1, …, T} puts us in the setting considered earlier with n = T. We immediately obtain 4√(dT log(eT)), matching the results of [8, p. 1873]. For the case of a finite number of experts, clearly N̂_∞ ≤ N, which gives the classical O(√(T log N)) bound [9].
Example: Isotron. Recently, Kalai and Sastry [16] introduced a method called Isotron for learning Single Index Models (SIM), which generalize linear and logistic regression, generalized linear models, and classification by linear threshold functions. A natural open question posed by the authors is whether there is an online variant of Isotron. Before even attempting a quest for such an algorithm, we can ask a more basic question: is the (idealized) SIM problem even learnable in the online framework? We answer the question in the positive with the tools we have developed, by proving that the following class (with X a Euclidean ball in R^d and Y = [−1, 1]) is learnable:

  H = { f(x, y) = (y − u(⟨w, x⟩))^2 | u : [−1, 1] → [−1, 1] is non-decreasing and 1-Lipschitz, ‖w‖_2 ≤ 1 }    (4)

where u and w range over the possibilities. Using the machinery we developed, it is not hard to show that the class H is online learnable in the supervised setting. Moreover, V_T(H, X × Y) = O(√T log^{3/2} T).
References
[1] J. Abernethy, A. Agarwal, P. Bartlett, and A. Rakhlin. A stochastic view of optimal regret through minimax duality. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[2] J. Abernethy, P. L. Bartlett, A. Rakhlin, and A. Tewari. Optimal strategies and minimax lower bounds for online convex games. In COLT, pages 414–424, 2008.
[3] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM, 44:615–631, 1997.
[4] N. Alon and J. Spencer. The Probabilistic Method. John Wiley & Sons, 2nd edition, 2000.
[5] P. L. Bartlett, P. M. Long, and R. C. Williamson. Fat-shattering and the learnability of real-valued functions. Journal of Computer and System Sciences, 52(3):434–452, 1996. (Special issue on COLT'94).
[6] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. J. Mach. Learn. Res., 3:463–482, 2003.
[7] S. Ben-David, D. Pál, and S. Shalev-Shwartz. Agnostic online learning. In COLT, 2009.
[8] N. Cesa-Bianchi and G. Lugosi. On prediction of individual sequences. Annals of Statistics, pages 1865–1895, 1999.
[9] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[10] R. M. Dudley. The sizes of compact subsets of Hilbert space and continuity of Gaussian processes. Journal of Functional Analysis, 1(3):290–330, 1967.
[11] R. M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, 1999.
[12] E. Giné and J. Zinn. Some limit theorems for empirical processes. Annals of Probability, 12(4):929–989, 1984.
[13] J. Hannan. Approximation to Bayes risk in repeated play. Contributions to the Theory of Games, 3:97–139, 1957.
[14] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100(1):78–150, 1992.
[15] S. M. Kakade and A. Kalai. From batch to transductive online learning. In NIPS, 2005.
[16] A. Tauman Kalai and R. Sastry. The Isotron algorithm: High-dimensional isotonic regression. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[17] M. J. Kearns and R. E. Schapire. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48(3):464–497, 1994.
[18] V. Koltchinskii and D. Panchenko. Rademacher processes and bounding the risk of function learning. High Dimensional Probability II, 47:443–459, 2000.
[19] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4):285–318, 1988.
[20] P. Massart. Some applications of concentration inequalities to statistics. Annales de la Faculté des Sciences de Toulouse, IX(2):245–303, 2000.
[21] S. Mendelson. A few notes on statistical learning theory. In MLSS 2002, pages 1–40. 2003.
[22] S. Mendelson and R. Vershynin. Entropy and the combinatorial dimension. Inventiones Mathematicae, 152:37–55, 2003.
[23] D. Pollard. Empirical Processes: Theory and Applications, volume 2. Hayward, CA, 1990.
[24] N. Sauer. On the density of families of sets. Journal of Combinatorial Theory, 13:145–147, 1972.
[25] S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. In NIPS, pages 1265–1272. MIT Press, Cambridge, MA, 2007.
[26] S. Shelah. A combinatorial problem: Stability and order for models and theories in infinitary languages. Pacific Journal of Mathematics, 4:247–261, 1972.
[27] K. Sridharan and A. Tewari. Convex games in Banach spaces. In COLT, 2010.
[28] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series, March 1996.
[29] L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
[30] S. A. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.
[31] V. N. Vapnik. Estimation of Dependences Based on Empirical Data (Springer Series in Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1982.
[32] V. N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264–280, 1971.
[33] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928–936, 2003.
9
| 4116 |@word version:5 norm:5 nd:2 open:1 d2:3 contraction:2 prokhorov:1 q1:2 pick:2 mention:2 series:2 chervonenkis:3 discretization:3 written:1 john:1 chicago:1 enables:1 update:1 leaf:2 accordingly:1 lr:1 characterization:1 provides:3 math:1 simpler:2 mathematical:1 along:4 prove:5 introduce:2 indeed:1 expected:1 sublinearly:2 p1:2 decreasing:1 little:1 armed:2 considering:1 increasing:1 conv:1 provided:1 abound:1 notation:1 bounded:8 moreover:2 agnostic:2 hayward:1 what:2 argmin:1 developed:3 finding:1 nj:1 guarantee:1 temporal:2 every:4 ti:1 growth:2 fat:20 exactly:1 k2:2 control:2 before:1 positive:1 understood:1 mistake:2 consequence:1 esp:1 ext:2 mach:1 limit:2 path:3 abuse:1 lugosi:3 might:1 koltchinskii:1 studied:1 quantified:1 range:2 bi:2 unique:1 yj:1 union:1 regret:11 universal:1 discretizations:1 empirical:6 projection:1 matching:1 integrating:1 get:2 onto:1 put:1 risk:4 context:2 isotonic:1 restriction:1 equivalent:2 zinkevich:2 yt:8 go:1 starting:1 convex:16 immediately:1 insight:1 haussler:2 array:2 proving:3 classic:1 notion:9 stability:1 pt:3 play:4 suppose:3 controlling:1 target:1 programming:1 us:1 trick:1 element:1 satisfying:1 ft:20 role:3 capture:1 ensures:1 improper:3 technological:1 mentioned:1 panchenko:1 convexity:2 complexity:30 weakly:1 tight:1 depend:1 upon:1 learner:14 completely:2 packing:4 easily:1 various:3 talk:1 distinct:1 ef1:1 outcome:1 shalev:4 refined:1 outside:1 whose:1 abernethy:2 posed:1 valued:21 relax:1 tightness:1 ramp:1 toulouse:1 statistic:4 vaart:1 transductive:3 itself:1 online:50 sequence:12 ledoux:1 net:1 propose:2 interaction:1 product:3 maximal:1 secaucus:1 convergence:6 extending:1 rademacher:22 generating:1 ben:4 object:2 develop:3 alon:3 ij:1 qt:4 sim:2 eq:1 strong:1 wonderful:1 implies:2 come:2 skip:1 closely:1 attribute:1 stochastic:2 vc:3 require:1 fix:3 generalization:5 f1:6 preliminary:1 opt:2 proposition:5 spencer:2 extension:1 hold:3 considered:2 great:1 mapping:6 predict:1 achieves:1 smallest:2 estimation:2 integrates:1 combinatorial:17 label:1 sensitive:5 symmetrization:2 largest:2 tool:3 weighted:2 minimization:1 mit:1 clearly:2 always:1 gaussian:2 modified:1 rather:1 kalai:3 corollary:4 focus:1 vapnikchervonenkis:1 contrast:2 realizable:4 contr:1 shattered:2 typically:1 eliminate:1 integrated:2 compactness:1 interested:1 i1:3 issue:1 among:2 dual:2 classification:5 denoted:2 colt:4 development:1 special:2 equal:1 shattering:6 icml:1 discrepancy:1 np:3 t2:1 few:1 individual:3 consisting:1 isotron:4 karthik:1 n1:1 interest:3 possibility:1 chain:1 implication:1 integral:2 necessary:3 sauer:5 machinery:1 unless:1 tree:35 iv:1 euclidean:1 littlestone:7 re:1 instance:2 fenchel:1 earlier:1 cover:12 deviation:1 subset:2 uniform:5 predictor:1 learnability:21 characterize:1 pal:1 answer:1 supx:3 considerably:1 chooses:1 referring:1 st:1 combined:1 vershynin:1 randomized:3 density:1 sequel:1 probabilistic:2 yl:2 quickly:1 fat1:2 squared:1 cesa:4 central:1 choose:1 expert:19 style:1 de:4 subsumes:1 includes:1 inc:1 fatshattering:1 depends:1 idealized:1 multiplicative:1 root:2 lot:1 view:1 mls:1 characterizes:1 sup:9 recover:1 bayes:1 parallel:1 shai:4 il:3 ni:1 who:3 ekt:2 yield:1 generalize:1 weak:3 rx:8 history:1 suffers:2 mathematicae:1 whenever:2 definition:11 infinitesimal:1 inventiones:1 frequency:1 involved:2 obvious:1 naturally:1 proof:9 associated:2 recovers:1 static:2 boil:1 proved:2 ask:1 lim:1 hilbert:1 actually:1 appears:1 dt:7 supervised:16 yb:1 evaluated:2 though:2 strongly:2 generality:1 furthermore:1 just:1 
implicit:1 talagrand:1 continuity:1 logistic:1 grows:2 usa:1 verify:1 concept:1 counterpart:2 inductive:1 hence:2 illustrated:1 round:6 ex1:1 game:24 covering:11 rooted:1 chaining:3 generalized:2 complete:1 theoretic:1 fj:5 fi:2 recently:1 functional:1 exponentially:2 volume:1 banach:1 extend:1 eft:2 analog:2 interpret:1 kwk2:1 composition:1 cambridge:4 mistaken:1 rd:3 consistency:1 trivially:1 fk:4 enjoyed:1 sastry:2 ldim:5 language:1 phrasing:1 similarity:1 gt:3 recent:1 inf:13 irrelevant:1 reverse:1 scenario:3 certain:1 verlag:1 inequality:3 binary:14 kwk1:1 vt:23 yi:1 der:1 wji:1 seen:2 additional:1 somewhat:1 care:1 r0:1 monotonically:1 ii:2 full:2 hannan:2 ing:1 technical:1 characterized:1 long:1 e1:1 prediction:14 shelah:6 basic:3 regression:2 variant:1 essentially:1 expectation:2 metric:2 agarwal:1 receive:2 interval:2 else:1 source:2 finiteness:3 appropriately:1 rest:1 posse:1 exhibited:1 massart:3 strict:1 subject:1 comment:1 kwk22:1 kwi:1 ascent:1 sridharan:2 spirit:1 effectiveness:1 call:2 structural:4 revealed:1 iii:1 easy:5 enough:1 xj:1 zi:1 pennsylvania:1 topology:2 reduce:1 idea:2 avid:2 texas:1 cousin:1 whether:1 utility:1 bartlett:4 wellner:1 improperly:2 f:3 pollard:1 passing:1 york:1 useful:2 tewari:3 aimed:1 extensively:1 bac:1 reduced:1 generate:1 schapire:1 exist:1 notice:1 discrete:1 write:3 shall:4 key:2 threshold:3 shatters:2 v1:1 vast:2 annales:1 zinn:1 run:1 prob:1 named:1 arrive:1 family:1 reader:1 draw:1 decision:4 bound:26 layer:1 followed:1 played:2 fold:1 yielded:1 annual:2 precisely:1 phrased:1 aspect:1 argument:5 min:2 attempting:1 separable:2 department:2 according:1 ball:1 march:1 son:1 kakade:1 deferring:1 making:2 modification:1 taken:2 discus:2 singer:1 mind:1 know:1 end:4 studying:1 operation:1 permit:1 apply:2 generic:4 appropriate:1 appearing:1 dudley:5 batch:6 existence:1 rz:1 denotes:2 running:1 cf:1 ensure:1 k1:3 build:1 prof:1 classical:13 move:2 question:4 quantity:2 strategy:6 concentration:1 dependence:3 rt:10 usual:1 said:2 gin:1 gradient:2 outer:1 trivial:1 reason:1 assuming:1 length:1 pointwise:1 relationship:1 index:1 equivalently:1 statement:1 relate:1 stated:1 zt:3 proper:1 unknown:1 bianchi:4 upper:8 observation:1 finite:9 descent:2 defining:2 extended:3 witness:1 precise:1 situation:1 y1:3 communication:1 david:4 introduced:4 pair:3 specified:1 connection:1 z1:3 learned:1 nip:2 able:2 adversary:6 proceeds:3 below:1 beyond:1 rt1:1 appeared:1 reading:1 summarize:1 ambuj:1 built:1 max:6 including:1 analogue:4 event:1 natural:2 minimax:8 picture:1 lk:1 wj1:1 faced:1 nice:1 understanding:1 literature:1 relative:1 law:1 loss:12 mixed:1 remarkable:1 incurred:1 sufficient:2 austin:1 lo:1 course:1 surprisingly:1 last:1 free:1 theo:1 enjoys:1 formal:1 allow:1 institute:1 characterizing:3 taking:1 absolute:1 tauman:1 van:2 dimension:19 depth:16 cumulative:2 author:1 made:2 adaptive:1 coincide:3 far:1 log3:4 compact:3 supremum:1 dealing:2 b1:1 assumed:1 xi:7 shwartz:4 alternatively:1 facult:1 nature:2 transfer:1 learn:1 ca:1 williamson:1 ft0:2 pk:2 bounding:4 edition:1 n2:2 repeated:4 x1:6 borel:1 ff:1 fashion:1 wiley:1 fails:1 wish:1 toyota:1 ix:1 hw:3 theorem:21 down:1 rk:1 xt:34 pac:3 showing:2 learnable:15 rakhlin:3 appeal:1 exists:6 mendelson:3 vapnik:4 sequential:15 valiant:1 mirror:1 kx:2 margin:3 entropy:2 simply:3 kxk:2 springer:3 satisfies:1 relies:1 acm:2 ma:1 comparator:1 ann:1 lipschitz:8 fw:3 hard:1 specifically:1 uniformly:1 lemma:15 kearns:1 called:4 geer:1 pas:1 duality:2 ya:3 player:7 la:1 formally:3 quest:1 latter:1 
alexander:1 d1:3 |
3,442 | 4,117 | PAC-Bayesian Model Selection
for Reinforcement Learning
Joelle Pineau
School of Computer Science
McGill University
Montreal, Canada
[email protected]
Mahdi Milani Fard
School of Computer Science
McGill University
Montreal, Canada
[email protected]
Abstract
This paper introduces the first set of PAC-Bayesian bounds for the batch reinforcement learning problem in finite state spaces. These bounds hold regardless
of the correctness of the prior distribution. We demonstrate how such bounds can
be used for model-selection in control problems where prior information is available either on the dynamics of the environment, or on the value of actions. Our
empirical results confirm that PAC-Bayesian model-selection is able to leverage
prior distributions when they are informative and, unlike standard Bayesian RL
approaches, ignores them when they are misleading.
1 Introduction
Bayesian methods in machine learning, although elegant and concrete, have often been criticized
not only for their computational cost, but also for their strong assumptions on the correctness of the
prior distribution. There are usually no theoretical guarantees when performing Bayesian inference
with priors that do not admit the correct posterior. Probably Approximately Correct (PAC) learning
techniques, on the other hand, provide distribution-free convergence guarantees with polynomially-bounded sample sizes [1]. These bounds, however, are notoriously loose and impractical. One can
argue that such loose bounds are to be expected, as they reflect the inherent difficulty of the problem
when no assumptions are made on the distribution of the data.
Both PAC and Bayesian methods have been proposed for reinforcement learning (RL) [2, 3, 4, 5,
6, 7, 8], where an agent is learning to interact with an environment to maximize some objective
function. Many of these methods aim to solve the so-called exploration-exploitation problem by
balancing the amount of time spent on gathering information about the dynamics of the environment
and the time spent acting optimally according to the currently built model. PAC methods are much
more conservative than Bayesian methods as they tend to spend more time exploring the system
and collecting information [9]. Bayesian methods, on the other hand, are greedier and only solve
the problem over a limited planning horizon. As a result of this greediness, Bayesian methods can
converge to suboptimal solutions. It has been shown that Bayesian RL is not PAC [9]. We argue here
that a more adaptive method can be PAC and at the same time more data efficient if an informative
prior is taken into account. Such adaptive techniques have been studied within the PAC-Bayesian
literature for supervised learning.
The PAC-Bayesian approach, first introduced by McAllester [10] (extending the work of Shawe-Taylor et al. [11]), combines the distribution-free correctness of PAC theorems with the data-efficiency of Bayesian inference. This is achieved by removing the assumption of the correctness of
the prior and, instead, measuring the consistency of the prior over the training data. The empirical
results of model selection algorithms for classification tasks using these bounds are comparable to
some of the most popular learning algorithms such as AdaBoost and Support Vector Machines [12].
PAC-Bayesian bounds have also been linked to margins in classification tasks [13].
This paper introduces the first results of the application of PAC-Bayesian techniques to the batch
RL problem. We derive two PAC-Bayesian bounds on the approximation error in the value function
of stochastic policies for reinforcement learning on observable and discrete state spaces. One is a
bound on model-based RL where a prior distribution is given on the space of possible models. The
second one is for the case of model-free RL, where a prior is given on the space of value functions.
In both cases, the bound depends both on an empirical estimate and a measure of distance between
the stochastic policy and the one imposed by the prior distribution. We present empirical results
where model-selection is performed based on these bounds, and show that PAC-Bayesian bounds
follow Bayesian policies when the prior is informative and mimic the PAC policies when the prior
is not consistent with the data. This allows us to adaptively balance between the distribution-free
correctness of PAC and the data-efficiency of Bayesian inference.
2 Background and Notation
In this section, we introduce the notations and definitions used in the paper.
A Markov Decision Process (MDP) M = (S, A, T, R) is defined by a set of states S, a set of
actions A, a transition function T(s, a, s′) defined as:

T(s, a, s′) = p(s_{t+1} = s′ | s_t = s, a_t = a),  ∀ s, s′ ∈ S, a ∈ A,   (1)
and a (possibly stochastic) reward function R(s, a) : S × A → [R_min, R_max]. Throughout the paper we assume finite-state, finite-action, discounted-reward MDPs, with the discount factor denoted by γ. A reinforcement learning agent chooses an action and receives a reward. The environment will
then change to a new state according to the transition probabilities.
A policy is a (possibly stochastic) function from states to actions. The value of a state-action pair (s, a) for policy π, denoted by Q^π(s, a), is the expected discounted sum of rewards (Σ_t γ^t r_t) if the agent acts according to that policy after taking action a in the first step. The value function satisfies the Bellman equation [14]:

Q^π(s, a) = R(s, a) + γ Σ_{s′∈S} T(s, a, s′) Q^π(s′, π(s′)).   (2)
The optimal policy is the policy that maximizes the value function. The optimal value of a state-action pair, denoted by Q*(s, a), satisfies the Bellman optimality equation [14]:

Q*(s, a) = R(s, a) + γ Σ_{s′∈S} T(s, a, s′) max_{a′∈A} Q*(s′, a′).   (3)
There are many methods developed to find the optimal policy for a given MDP when transition and
reward functions are known. Value iteration [14] is a simple dynamic programming method in which
one iteratively applies the Bellman optimality operator, denoted by B, to an initial guess of the
optimal value function:
BQ(s, a) = R(s, a) + γ Σ_{s′∈S} T(s, a, s′) max_{a′∈A} Q(s′, a′).   (4)

For simplicity we write BQ when B is applied to the value of all state-action pairs. Since B is a contraction with respect to the infinity norm [15] (i.e. ‖BQ − BQ′‖_∞ ≤ γ‖Q − Q′‖_∞), the value iteration algorithm will converge to the fixed point of the Bellman optimality operator, which is the optimal value function (BQ* = Q*).
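To make the update concrete, here is a minimal Python sketch (ours, not from the paper; the array shapes and function name are assumptions) that applies B repeatedly until the sup-norm change falls below a tolerance:

import numpy as np

def value_iteration(T, R, gamma=0.9, tol=1e-8):
    # T: transition tensor of shape (S, A, S); R: reward array of shape (S, A)
    Q = np.zeros(R.shape)
    while True:
        # BQ(s,a) = R(s,a) + gamma * sum_s' T(s,a,s') * max_a' Q(s',a')
        BQ = R + gamma * T.dot(Q.max(axis=1))
        if np.abs(BQ - Q).max() < tol:  # contraction guarantees convergence
            return BQ
        Q = BQ

Because B is a γ-contraction, the loop terminates for any initial guess of Q.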
3
Model-Based PAC-Bayesian Bound
In model-based RL, one aims to estimate the transition and reward functions and then act optimally
according to the estimated models. PAC methods use the empirical average for their estimated model
along with frequentist bounds. Bayesian methods use the Bayesian posterior to estimate the model.
This section provides a bound that suggests an adaptive method to choose a stochastic estimate
between these two extremes, which is both data-efficient and has guaranteed performance.
Assuming that the reward model is known (we make this assumption throughout this section), one
can build empirical models of the transition dynamics by gathering sample transitions, denoted by
U, and taking the empirical average. Let this empirical average model be T̂(s, a, s′) = n_{s,a,s′}/n_{s,a}, where n_{s,a,s′} and n_{s,a} are the number of corresponding transitions and samples. Trivially, E[T̂] = T. The empirical value function, denoted by Q̂, is defined to be the value function on an MDP with the empirical transition model. As one observes more and more sample trajectories on the MDP, the empirical model gets increasingly more accurate, and so will the empirical value function. Different forms of the following lemma, connecting the error rates on T̂ and Q̂, are used in many of the PAC-MDP results [4]:
Lemma 1. There is a constant k ∝ (1 − γ)²/γ such that:

∀ s, a : ‖T̂(s, a, ·) − T(s, a, ·)‖₁ ≤ kε  ⟹  ∀ π : ‖Q̂^π − Q^π‖_∞ ≤ ε.   (5)
As a consequence of the above lemma, one can act near-optimally in the part of the MDP for which
we have gathered enough samples to have a good empirical estimate of the transition model. PAC-MDP methods explicitly [2] or implicitly [3] use that fact to exploit the knowledge on the model as long as they are in the "known" part of the state space. The downside of these methods is that
without further assumptions on the model, it will take a large number of sample transitions to get a
good empirical estimate of the transition model.
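For concreteness, the maximum-likelihood model T̂ can be tabulated from a list of observed transitions as in the following sketch (our own illustration; the paper prescribes no implementation, and the sample format is assumed):

import numpy as np

def empirical_model(transitions, S, A):
    # transitions: list of (s, a, s_next) tuples sampled from the MDP
    counts = np.zeros((S, A, S))
    for s, a, s_next in transitions:
        counts[s, a, s_next] += 1
    n_sa = counts.sum(axis=2, keepdims=True)      # n_{s,a} for each pair
    T_hat = counts / np.maximum(n_sa, 1)          # T_hat = n_{s,a,s'} / n_{s,a}
    return T_hat, n_sa.squeeze(-1)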
The Bayesian approach to modeling the transition dynamics, on the other hand, starts with a prior
distribution over the transition probability and then marginalizes this prior over the data to get a
posterior distribution. This is usually done by assuming independent Dirichlet distributions over the
transition probabilities, with some initial count vector α, and then adding up the observed counts to this initial vector to get the conjugate posterior [6]. The initial α-vector encodes the prior knowledge
on the transition probabilities, and larger initial values further bias the empirical observation towards
the initial belief.
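In the Dirichlet case the conjugate update is just count addition; a sketch (variable names are ours, not the paper's):

import numpy as np

def dirichlet_posterior_sample(alpha_prior, counts, rng=None):
    # alpha_prior, counts: arrays of shape (S, A, S)
    if rng is None:
        rng = np.random.default_rng()
    alpha_post = alpha_prior + counts         # conjugate posterior parameters
    S, A, _ = alpha_post.shape
    T = np.empty_like(alpha_post)
    for s in range(S):                        # sample one model T' ~ posterior
        for a in range(A):
            T[s, a] = rng.dirichlet(alpha_post[s, a])
    return T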
If a strong prior is close to the true values, the Bayesian posterior will be more accurate than the
empirical point estimate. However, a strong prior peaked on the wrong values will bias the Bayesian
model away from the correct probabilities. Therefore, the Bayesian posterior might not provide
the optimal estimate of the model parameters. A good posterior distribution might be somewhere
between the empirical point estimate and the Bayesian posterior.
The following theorem is the first PAC-Bayesian bound on the estimation error in the value function
when we build a stochastic policy1 based on some arbitrary posterior distribution Mq .
Theorem 2. Let π*_{T′} be the optimal policy with respect to the MDP with transition model T′ and Δ_{T′} = ‖Q̂^{π*_{T′}} − Q^{π*_{T′}}‖_∞. For any prior distribution M_p on the transition model, any posterior M_q, any i.i.d. sampling distribution U, with probability no less than 1 − δ over the sampling of U ∼ U:

∀ M_q : E_{T′∼M_q} Δ_{T′} ≤ sqrt[ (D(M_q‖M_p) − ln δ + |S| ln 2 + ln |S| + ln n_min) / ((n_min − 1) k²/2) ],   (6)

where n_min = min_{s,a} n_{s,a} and D(·‖·) is the Kullback-Leibler (KL) divergence.
The above theorem (proved in the Appendix) provides a lower bound on the expectation of the true
value function when the policy is taken to be optimal according to the sampled model from the
posterior:
E Q^{π*_{T′}} ≥ E Q̂^{π*_{T′}} − Õ( sqrt[ D(M_q‖M_p)/n_min ] ).   (7)
This lower bound suggests a stochastic model-selection method in which one searches in the space
of posteriors to maximize the bound. Notice that there are two elements to the above bound. One
is the PAC part of the bound that suggests the selection of models with high empirical value functions for their optimal policy. There is also a penalty term (or a regularization term) that penalizes
distributions that are far from the prior (the Bayesian side of the bound).
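To make the model-selection quantity concrete, the lower bound of Theorem 2 can be assembled as below. This is our own sketch, under two assumptions not stated in the paper: the posterior and prior are products of Dirichlets (so the KL divergence decomposes over state-action pairs), and the empirical value term for the candidate posterior has already been estimated separately.

import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(a_q, a_p):
    # KL( Dir(a_q) || Dir(a_p) ) for one state-action row of parameters
    a_q0, a_p0 = a_q.sum(), a_p.sum()
    return (gammaln(a_q0) - gammaln(a_q).sum()
            - gammaln(a_p0) + gammaln(a_p).sum()
            + ((a_q - a_p) * (digamma(a_q) - digamma(a_q0))).sum())

def theorem2_bound(value_term, a_q, a_p, n_min, k, S, delta=0.05):
    # Lower bound of Theorem 2: empirical value minus the complexity penalty
    kl = sum(dirichlet_kl(a_q[s, a], a_p[s, a])
             for s in range(a_q.shape[0]) for a in range(a_q.shape[1]))
    penalty = np.sqrt((kl - np.log(delta) + S * np.log(2)
                       + np.log(S) + np.log(n_min))
                      / ((n_min - 1) * k ** 2 / 2))
    return value_term - penalty

Model selection then amounts to maximizing this return value over candidate posteriors.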
¹ This is a more general form of stochastic policy than is usually seen in the RL literature. A complete policy is sampled from an imposed distribution, correlating the selection of actions on different states.
Margin for Deterministic Policies
One could apply Theorem 2 with any choice Mq . Generally, this will result in a bound on the value
of a stochastic policy. However, if the optimal policy is the same for all of the possible samples from
the posterior, then we will get a bound for that particular deterministic policy.
We define the support of policy π, denoted by T_π, to be the set of transition models for which the optimal policy is π. Putting all the posterior probability on T_π will result in a tighter bound for the value of the policy π. The tightest bound occurs when M_q is a scaled version of M_p summing to 1 over T_π, that is when we have:

M_q(T′) = M_p(T′)/M_p(T_π)  if T′ ∈ T_π,   M_q(T′) = 0  if T′ ∉ T_π.   (8)

In that case, the KL divergence is D(M_q‖M_p) = −ln M_p(T_π), and the bound will be:

E Q^{π*_{T′}} ≥ E Q̂^{π*_{T′}} − Õ( sqrt[ −ln M_p(T_π)/n_min ] ).   (9)
Intuitively, we will get tighter bounds for policies that have larger empirical values and higher prior
probabilities supporting them.
Finding M_p(T_π) might not be computationally tractable. Therefore, we define a notion of margin for transition functions and policies and use it to get tractable bounds. The margin of a transition function T′, denoted by μ_{T′}, is the maximum distance we can move away from T′ such that the optimal policy does not change:

‖T″ − T′‖₁ ≤ μ_{T′}  ⟹  π*_{T″} = π*_{T′}.   (10)
The margin defines a hypercube around T′ for which the optimal policy does not change. In cases
where the support set of a policy is difficult to find, one can use this hypercube to get a reasonable
bound for the true value function of the corresponding policy. In that case, we would define the
posterior to be the scaled prior defined only on the margin hypercube. The idea behind this method
is similar to that of the Luckiness framework [11] and large-margin classifiers [16, 13]. This shows
that the idea of maximizing margins can be applied to control problems as well as classification and
regression tasks.
To find the margin of any given T′, if we know the value of the second best policy, we can calculate its regret according to T′ (it will be the smallest regret ε_min). Using Lemma 1, we can conclude that if ‖T″ − T′‖₁ ≤ kε_min/2, then the value of the best and second best policies can change by at most ε_min/2, and thus the optimal policy will not change. Therefore, μ_{T′} ≥ kε_min/2. One can then
define the posterior on the transitions inside the margin to get a bound for the value function.
4 Model-Free PAC-Bayes Bound
In this section we introduce a PAC-Bayesian bound for model-free reinforcement learning on discrete state spaces. This time we assume that we are given a prior distribution on the space of value
functions, rather than on transition models. This prior encodes an initial belief about the optimal
value function for a given RL domain. This could be useful, for example, in the context of transfer
learning, where one has learned a value function in one environment and then uses that as the prior
belief on a similar domain.
We start by defining the TD error of a given value function Q to be ‖Q − BQ‖_∞. In most cases, we do not have access to the Bellman optimality operator. When we only have access to a sample set U collected on the RL domain, we can define the empirical Bellman optimality operator B̂ to be:

B̂Q(s, a) = (1/n_{s,a}) Σ_{(s,a,s′,r)∈U} ( r + γ max_{a′} Q(s′, a′) ),   (11)
Note that E[B̂Q] = BQ. We further make the assumption that all the BQ values we could observe are bounded in the range [c_min, c_max], with c = c_max − c_min. Using this assumption, one can use Hoeffding's inequality to bound the difference between the empirical and true Bellman operators:

Pr{ |B̂Q(s, a) − BQ(s, a)| > ε } ≤ e^{−2 n_{s,a} ε²/c²}.   (12)
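A direct translation of the empirical operator B̂ in (11) is straightforward; the following is our own sketch (the sample storage format is an assumption):

import numpy as np

def empirical_bellman(Q, samples, gamma):
    # samples: list of (s, a, s_next, r) transitions; Q: array of shape (S, A)
    acc = np.zeros(Q.shape)
    n = np.zeros(Q.shape)
    for s, a, s_next, r in samples:
        acc[s, a] += r + gamma * Q[s_next].max()
        n[s, a] += 1
    return acc / np.maximum(n, 1)   # BhatQ(s,a); unvisited pairs left at 0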
When the true Bellman operator is not known, one makes use of the empirical TD error, similarly defined to be ‖Q − B̂Q‖_∞. Q-learning [14] and its derivations with function approximation [17],
and also batch methods such as LSTD [18], often aim to minimize the empirical (projected) TD
error. We argue that it might be better to choose a function that is not a fixed point of the empirical
Bellman operator. Instead, we aim to minimize the upper bound on the approximation error (which
might be referred to as loss) of the Q function, as compared to the true optimal value.
The following theorem (proved in the Appendix) is the first PAC-Bayesian bound for model-free
batch RL on discrete state spaces:
Theorem 3. Let Δ_Q = ‖Q − Q*‖_∞ − ‖Q − B̂Q‖_∞/(1 − γ). For all prior distributions J_p and posteriors J_q over the space of value functions, with probability no less than 1 − δ over the sampling of U ∼ U:

∀ J_q : E_{Q∼J_q} Δ_Q ≤ sqrt[ (D(J_q‖J_p) − ln δ + ln |S| + ln |A| + ln n_min) / (2(n_min − 1)(1 − γ)²/c²) ].   (13)

This time we have an upper bound on the expected approximation error:

E‖Q − Q*‖_∞ ≤ E‖Q − B̂Q‖_∞/(1 − γ) + Õ( sqrt[ D(J_q‖J_p)/n_min ] ).   (14)
This suggests a model-selection method in which one would search for a posterior Jq to minimize the
above bound. The PAC side of the bound guides this model-selection method to look for posteriors
with smaller empirical TD error. The Bayesian part, on the other hand, penalizes the selection of
posteriors that are far from the prior distribution.
One can use general forms of priors that would impose smoothness or sparsity for this modelselection technique. In that sense, this method would act as a regularization technique that penalizes
complex and irregular functions. The idea of regularization in RL with function approximation is
not new to this work [19]. This bound, however, is more general, as it could incorporate not only
smoothness constraints, but also other forms of prior knowledge into the learning process.
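For a candidate posterior J_q, the upper bound (14) can be evaluated numerically. The sketch below is ours, under the assumption (matching the experiments in Section 5, but not required by the theorem) that J_q and J_p are products of independent Gaussians, and that the expected empirical TD error has been estimated from posterior samples:

import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    # KL between products of independent Gaussians, summed over all (s, a)
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def theorem3_bound(td_error, mu_q, var_q, mu_p, var_p,
                   n_min, gamma, c, S, A, delta=0.05):
    # td_error: estimate of E_{Q~Jq} ||Q - BhatQ||_inf
    kl = gaussian_kl(mu_q, var_q, mu_p, var_p)
    penalty = np.sqrt((kl - np.log(delta) + np.log(S) + np.log(A)
                       + np.log(n_min))
                      / (2.0 * (n_min - 1) * (1 - gamma) ** 2 / c ** 2))
    return td_error / (1 - gamma) + penalty   # upper bound on E||Q - Q*||_inf

Model selection then minimizes this return value over the posterior family.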
5 Empirical Results
To illustrate the model-selection techniques based on the bounds in the paper, we consider one
model-based RL domain and one model-free problem. The model-based domain is a chain model
in which states are ordered by their index. The last state has a reward of 1 and all other states have
reward 0. There are two types of actions. One is a stochastic ?forward? operation which moves us to
the next state in the chain with probability 0.5 and otherwise makes a random transition. The second
type is a stochastic ?reset? which moves the system to the first state in the chain with probability 0.5
and makes a random transition otherwise. In this domain, we have at each state two actions that do
stochastic reset and one action that is a stochastic forward. There are 10 states and ? = 0.9.
When there are only a few sample transitions for each state-action pair, there is a high
chance that the frequentist estimate confuses a reset action with a forward. Therefore, we expect a
good model-based prior to be useful in this case. We use independent Dirichlets as our prior. We
experiment with priors for which the Dirichlet α-vector sums up to 10. We define our good prior to have α-vectors proportional to the true transition probabilities. A misleading prior is one for which
the vector is proportional to a transition model when the actions are switched between forward and
reset. A weighted sum between the good and bad priors creates a range of priors that gradually
change from being informative to misleading.
We compare the expected regret of three different methods. The empirical method uses the optimal
policy with respect to the empirical models. The Bayesian method samples a transition model from
the Bayesian Dirichlet posteriors (when the observed counts are added to the prior α-vectors) and
then uses the optimal policy with respect to the sampled model. The PAC-Bayesian method uses
counts + β·prior as the α-vector of the posterior and finds the value of β ∈ [0, 1], using linear search within values with distance 0.1, that maximizes the lower bound of Theorem 2 (with a more optimistic value for k and δ = 0.05). It then samples from that distribution and uses the optimal
policy with respect to the sampled model. The running time for a single run is a few seconds.
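The linear search itself is a one-dimensional grid scan; a sketch (ours; the helper bound_for, which evaluates the Theorem 2 lower bound for a given posterior parameter array, is hypothetical):

import numpy as np

def select_beta(counts, alpha_prior, bound_for):
    # bound_for(alpha_post) evaluates the Theorem 2 lower bound
    best_beta, best_bound = 0.0, -np.inf
    for beta in np.arange(0.0, 1.01, 0.1):      # grid with spacing 0.1
        b = bound_for(counts + beta * alpha_prior)
        if b > best_bound:
            best_beta, best_bound = beta, b
    return best_beta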
Figure 1 (left) shows the comparison between the maximum regret in these methods for different
sample sizes when the prior is informative. This is averaged over 50 runs for the Bayesian and PACBayesian methods and 10000 runs for the empirical method. The number of sampled transitions is
the same for all state?action pairs. As expected, the Bayesian method outperforms the empirical
one for small sample sizes. We can see that the PAC-Bayesian method is closely following the
Bayesian one in this case. With a misleading prior, however, as we can see in Figure 1 (center),
the empirical method outperforms the Bayesian one. This time, the regret rate of the PAC-Bayesian
method follows that of the empirical method. Figure 1 (right) shows how the PAC-Bayesian method
switches between following the empirical estimate and the Bayesian posterior as the prior gradually
changes from being misleading to informative (four sample transitions per state action pair). This
shows that the bound of Theorem 2 is helpful as a model selection technique.
[Figure 1 appears here: three panels plotting regret against sample size per state-action pair for the Empirical, Bayesian, and PAC-Bayesian methods (left: informative prior; center: misleading prior), and regret against the weight on the good prior (right).]

Figure 1: Average regrets of different methods. Error bars are 1 standard deviation of the mean.
The next experiment is to test the model-free bound of Theorem 3. The domain is a "puddle world". An agent moves around in a grid world of size 5 × 9 containing puddles with reward −1, an absorbing
goal state with reward +1, and reward 0 for the remaining states. There are stochastic actions along
each of the four cardinal directions that move in the correct direction with probability 0.7 and move
in a random direction otherwise. If the agent moves towards the boundary then it stays in its current
position.
[Figure 2 appears here: three 5 × 9 grid maps, each with a goal cell marked G and shaded puddle regions.]

Figure 2: Maps of puddle world RL domain. Shaded boxes are puddles.
We first learn the true value function of a known prior map of the world (Figure 2, left). We then use
that value function as the prior for our model-selection technique on two other environments. One of
them is a similar environment where the shape of the puddle is slightly changed (Figure 2, center).
We expect the prior to be informative and useful in this case. The other environment is, however,
largely different from the first map (Figure 2, right). We thus expect the prior to be misleading.
Table 1: Performance of different model-selection methods.

                      Similar Map     Different Map
Empirical Regret      0.21 ± 0.03     0.19 ± 0.03
Bayesian Regret       0.10 ± 0.01     1.16 ± 0.09
PAC-Bayesian Regret   0.12 ± 0.01     0.22 ± 0.04
Average α             0.58 ± 0.01     0.03 ± 0.03
We start with independent Gaussians (one for each state-action pair) as the prior, with the initial map's Q-values for the mean μ₀, and σ₀² = 0.01 for the variance. The posterior is chosen to be the product of Gaussians with mean

( α μ₀/σ₀² + n Q̂(·,·)/σ̂² ) ( α/σ₀² + n/σ̂² )⁻¹   and variance   ( α/σ₀² + n/σ̂² )⁻¹,

where σ̂² is the empirical variance. We sample from this posterior and act according to its greedy policy. For α = 1, this is the Bayesian posterior for the mean of a Gaussian with known variance. For α = 0, the prior is completely ignored. We will, however, find the α ∈ [0, 1] that minimizes the PAC-Bayesian bound of Theorem 3 (with an optimistic choice of c and δ = 0.05) and compare it with the performance of the empirical policy and a semi-Bayesian policy that acts according to a sampled value from the Bayesian posterior.
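In code, the interpolated posterior for each state-action pair is (our own sketch; argument names are assumptions):

import numpy as np

def interpolated_posterior(alpha, mu0, var0, q_hat, var_hat, n):
    # alpha = 1 recovers the Bayesian posterior; alpha = 0 ignores the prior
    precision = alpha / var0 + n / np.maximum(var_hat, 1e-12)
    mean = (alpha * mu0 / var0
            + n * q_hat / np.maximum(var_hat, 1e-12)) / precision
    return mean, 1.0 / precision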
Table 1 shows the average over 100 runs of the maximum regret for these methods and the average
of the selected α, with equal sample size of 20 per state-action pair. Again, it can be seen that the PAC-Bayesian method makes use of the prior (with higher values of α) when the prior is informative, and otherwise follows the empirical estimate (smaller values of α). It adaptively balances the usage
of the prior based on its consistency over the observed data.
6 Discussion
This paper introduces the first set of PAC-Bayesian bounds for the batch RL problem in finite state
spaces. We demonstrate how such bounds can be used for both model-based and model-free RL
methods. Our empirical results show that PAC-Bayesian model-selection uses prior distributions
when they are informative and useful, and ignores them when they are misleading.
For the model-based bound, we expect the running time of searching in the space of parametrized
posteriors to increase rapidly with the size of the state space. A more scalable version would sample
models around the posteriors, solve each model, and then use importance sampling to estimate the
value of the bound for each possible posterior. This problem does not exist with the model-free
approach, as we do not need to solve the MDP for each sampled model.
A natural extension to this work would be on domains with continuous state spaces, where one would
use different forms of function approximation for the value function. There is also the possibility
of future work in applications of PAC-Bayesian theorems in online reinforcement learning, where
one targets the exploration-exploitation problem. Online PAC RL with Bayesian priors has recently
been addressed with the BOSS algorithm [20]. PAC-Bayesian bounds could help derive similar
model-free algorithms with theoretical guarantees.
Acknowledgements: Funding for this work was provided by the National Institutes of Health (grant
R21 DA019800) and the NSERC Discovery Grant program.
Appendix
The following lemma, due to McAllester [21], forms the basis of the proofs for both bounds:
Lemma 4. For λ > 0, K > 0, and Q, P, Δ ∈ Rⁿ satisfying P_i, Q_i, Δ_i ≥ 0 and Σ_{i=1}^n Q_i = 1:

Σ_{i=1}^n P_i e^{λΔ_i²} ≤ K  ⟹  Σ_{i=1}^n Q_i Δ_i ≤ sqrt[ (D(Q‖P) + ln K)/λ ].   (15)
Note that even when we have arbitrary probability measures Q and P on a continuous space of Δ's, it might still be possible to define a sequence of vectors Q⁽¹⁾, Q⁽²⁾, ..., P⁽¹⁾, P⁽²⁾, ... and Δ⁽¹⁾, Δ⁽²⁾, ... such that Q⁽ⁿ⁾, P⁽ⁿ⁾ and Δ⁽ⁿ⁾ satisfy the condition of the lemma and

E_Q Δ = lim_{n→∞} Σ_{i=1}^n Q_i⁽ⁿ⁾ Δ_i⁽ⁿ⁾,   D(Q‖P) = lim_{n→∞} Σ_{i=1}^n Q_i⁽ⁿ⁾ ln( Q_i⁽ⁿ⁾/P_i⁽ⁿ⁾ ).   (16)
We will then take the limit of the conclusion of the lemma to get a bound for the continuous case [21].
Proof of Theorem 2 (Model-Based Bound)
Lemma 5. Let Δ_{T′} = ‖Q̂^{π*_{T′}} − Q^{π*_{T′}}‖_∞. With probability no less than 1 − δ over the sampling:

E_{T′∼M_p}[ e^{(1/2)(n_min−1)k²Δ²_{T′}} ] ≤ |S| 2^{|S|} n_min / δ.   (17)
Before proving Lemma 5, note that Lemma 5 and Lemma 4 together imply Theorem 2. We only need to apply the method described for arbitrary probability measures. To prove Lemma 5, it suffices to prove the following, swap the expectations and apply Markov's inequality:

E_{T′∼M_p} E_{U∼U}[ e^{(1/2)(n_min−1)k²Δ²_{T′}} ] ≤ |S| 2^{|S|} n_min.   (18)
Therefore, we only need to show that for any choice of T′, E_{U∼U}[ e^{(1/2)(n_min−1)k²Δ²_{T′}} ] follows the bound. Let a_s = π*_{T′}(s). We have:

Pr{ Δ_{T′} ≥ ε } ≤ Σ_s Pr{ ‖T̂(s, a_s, ·) − T(s, a_s, ·)‖₁ > kε }   (19)
  ≤ Σ_s 2^{|S|} e^{−(1/2) n_{s,a_s} (kε)²} ≤ |S| 2^{|S|} e^{−(1/2) n_min (kε)²}.   (20)
The first line is by Lemma 1. The second line is a concentration inequality for multinomials [22].
We choose to maximize E_{U∼U}[ e^{(1/2)(n_min−1)k²Δ²_{T′}} ], subject to Pr{ Δ_{T′} ≥ ε } ≤ |S| 2^{|S|} e^{−(1/2) n_min (kε)²}. The maximum occurs when the inequality is tight and the p.d.f. for Δ_{T′} is:

f(Δ) = |S| 2^{|S|} k² n_min Δ e^{−(1/2) n_min k² Δ²}.   (21)

We thus get:

E_{U∼U}[ e^{(1/2)(n_min−1)k²Δ²_{T′}} ] ≤ ∫₀^∞ e^{(1/2)(n_min−1)k²Δ²} f(Δ) dΔ   (22)
  = ∫₀^∞ |S| 2^{|S|} k² n_min Δ e^{−(1/2)k²Δ²} dΔ ≤ |S| 2^{|S|} n_min.   (23)
This concludes the proof of Lemma 5 and consequently Theorem 2.
Proof of Theorem 3 (Model-Free Bound)
Since B is a contraction with respect to the infinity norm and Q? is its fixed point, we have:
‖Q − Q*‖_∞ = ‖Q − BQ + BQ − BQ*‖_∞ ≤ ‖Q − BQ‖_∞ + ‖BQ − BQ*‖_∞   (24)
  ≤ ‖Q − BQ‖_∞ + γ‖Q − Q*‖_∞,   (25)

and thus ‖Q − Q*‖_∞ ≤ ‖Q − BQ‖_∞ / (1 − γ).
Lemma 6. Let Δ_Q = max(0, ‖Q − Q*‖_∞ − ‖Q − B̂Q‖_∞/(1 − γ)). With probability no less than 1 − δ:

E_{Q∼J_p}[ e^{2(n_min−1)(1−γ)²Δ²_Q/c²} ] ≤ |S||A| n_min / δ.   (26)
Similar to the previous section, Lemma 6 and Lemma 4 together imply Theorem 3.
To prove Lemma 6, similar to the previous proof, we only need to show that for any choice of Q, E_{U∼U}[ e^{2(n_min−1)(1−γ)²Δ²_Q/c²} ] follows the bound. We have that:

Pr{ Δ_Q ≥ ε } = Pr{ ‖Q − Q*‖_∞ ≥ ε + ‖Q − B̂Q‖_∞/(1 − γ) }   (27)
  ≤ Pr{ ‖Q − BQ‖_∞ ≥ (1 − γ)ε + ‖Q − B̂Q‖_∞ }   (28)
  ≤ Σ_{s,a} Pr{ |Q(s, a) − BQ(s, a)| ≥ (1 − γ)ε + ‖Q − B̂Q‖_∞ }   (29)
  ≤ Σ_{s,a} Pr{ |Q(s, a) − B̂Q(s, a)| + |B̂Q(s, a) − BQ(s, a)| ≥ (1 − γ)ε + ‖Q − B̂Q‖_∞ }   (30)
  ≤ Σ_{s,a} Pr{ |B̂Q(s, a) − BQ(s, a)| ≥ (1 − γ)ε }   (31)
  ≤ Σ_{s,a} e^{−2 n_{s,a} (1−γ)²ε²/c²} ≤ |S||A| e^{−2 n_min (1−γ)²ε²/c²}.   (32)

Eqn (28) follows from the derivations at the beginning of this section. Eqn (29) is by the union bound. Eqn (31) is by the definition of the infinity norm. The last derivation is by the Hoeffding inequality of Equation (12). Now again, similar to the model-based case, when the inequality is tight the p.d.f. is:

f(Δ) = 4|S||A| n_min (1 − γ)² c⁻² Δ e^{−2 n_min (1−γ)²Δ²/c²}.

We thus get:

E_{U∼U}[ e^{2(n_min−1)(1−γ)²Δ²_Q/c²} ] ≤ ∫₀^∞ e^{2(n_min−1)(1−γ)²Δ²/c²} f(Δ) dΔ
  = ∫₀^∞ 4|S||A| n_min (1 − γ)² c⁻² Δ e^{−2(1−γ)²Δ²/c²} dΔ = |S||A| n_min.

This concludes the proof of Lemma 6 and consequently Theorem 3.
References
[1] L. G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, 1984.
[2] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.
[3] R. I. Brafman and M. Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. The Journal of Machine Learning Research, 3:213-231, 2003.
[4] A. L. Strehl and M. L. Littman. A theoretical analysis of model-based interval estimation. In Proceedings of the 22nd International Conference on Machine Learning, pages 856-863, 2005.
[5] S. M. Kakade. On the sample complexity of reinforcement learning. PhD thesis, University College London, 2003.
[6] M. O. G. Duff. Optimal learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, University of Massachusetts Amherst, 2002.
[7] M. J. A. Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 943-950, 2000.
[8] T. Wang, D. Lizotte, M. Bowling, and D. Schuurmans. Bayesian sparse sampling for on-line reward optimization. In Proceedings of the 22nd International Conference on Machine Learning, page 963, 2005.
[9] J. Z. Kolter and A. Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th International Conference on Machine Learning, pages 513-520, 2009.
[10] D. A. McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355-363, 1999.
[11] J. Shawe-Taylor and R. C. Williamson. A PAC analysis of a Bayesian estimator. In Proceedings of the 10th Annual Conference on Computational Learning Theory, pages 2-9, 1997.
[12] P. Germain, A. Lacasse, F. Laviolette, and M. Marchand. PAC-Bayesian learning of linear classifiers. In Proceedings of the 26th International Conference on Machine Learning, pages 353-360, 2009.
[13] J. Langford and J. Shawe-Taylor. PAC-Bayes and margins. In Proceedings of Advances in Neural Information Processing Systems, pages 439-446, 2002.
[14] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[15] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming (Optimization and Neural Computation Series, 3). Athena Scientific, 1996.
[16] R. Herbrich and T. Graepel. A PAC-Bayesian margin bound for linear classifiers. IEEE Transactions on Information Theory, 48(12):3140-3150, 2002.
[17] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. Proceedings of Advances in Neural Information Processing Systems, 12:1057-1063, 2000.
[18] J. A. Boyan. Technical update: Least-squares temporal difference learning. Machine Learning, 49(2):233-246, 2002.
[19] A. Farahmand, M. Ghavamzadeh, C. Szepesvári, and S. Mannor. Regularized fitted Q-iteration: Application to planning. Recent Advances in Reinforcement Learning, pages 55-68, 2008.
[20] J. Asmuth, L. Li, M. L. Littman, A. Nouri, and D. Wingate. A Bayesian sampling approach to exploration in reinforcement learning. The 25th Conference on Uncertainty in Artificial Intelligence, 2009.
[21] D. A. McAllester. PAC-Bayesian model averaging. In Proceedings of the 12th Annual Conference on Computational Learning Theory, pages 164-170, 1999.
[22] T. Weissman, E. Ordentlich, G. Seroussi, S. Verdu, and M. J. Weinberger. Inequalities for the L1 deviation of the empirical distribution. Technical report, Information Theory Research Group, HP Laboratories, 2003.
3,443 | 4,118 | Occlusion Detection and Motion Estimation
with Convex Optimization
Alper Ayvaci,
Michalis Raptis,
Stefano Soatto
University of California, Los Angeles
{ayvaci, mraptis, soatto}@cs.ucla.edu
Abstract
We tackle the problem of simultaneously detecting occlusions and estimating optical flow. We show that, under standard assumptions of Lambertian reflection
and static illumination, the task can be posed as a convex minimization problem.
Therefore, the solution, computed using efficient algorithms, is guaranteed to be
globally optimal, for any number of independently moving objects, and any number of occlusion layers. We test the proposed algorithm on benchmark datasets,
expanded to enable evaluation of occlusion detection performance.
1 Introduction
Optical flow refers to the deformation of the domain of an image that results from ego- or scene
motion. It is, in general, different from the motion field, that is the projection onto the image plane
of the spatial velocity of the scene [28], unless three conditions are satisfied: (a) Lambertian reflection, (b) constant illumination, and (c) constant visibility properties of the scene. Most surfaces
with benign reflectance properties (diffuse/specular) can be approximated as Lambertian almost everywhere under sparse illuminants (e.g., the sun). In any case, widespread violation of Lambertian
reflection does not enable correspondence [23], so we will embrace (a) as customary. Similarly, (b)
constant illumination is a reasonable assumption for ego-motion (the scene is not moving relative to
the light source), and even for objects moving (slowly) relative to the light source.1 Assumption (c)
is the most critical, as it is needed for the motion field to be defined.2 It is often taken for granted in
the optical flow literature, because in the limit where two images are sampled infinitesimally close in
time, there are no occluded regions, and one can focus solely on motion discontinuities. Thus, most
variational motion estimation approaches provide an estimate of a dense flow field at each location
on the image domain, including occluded regions. Alas, in occluded regions, the problem is not that
optical flow is discontinuous, or forward-backward inconsistent; it is simply not defined. Motion in
occluded regions can be hallucinated; However, whatever motion is assigned to an occluded region
cannot be validated from the data. In defense of these methods, it can be argued that, even without
taking the limit, for small parallax (slow-enough motion, or far-enough objects, or fast-enough temporal sampling) occluded areas are small. However, small does not mean unimportant, as occlusions
are critical to perception [8] and a key for developing representations for recognition [22].
For this reason, we focus on issues of visibility in optical flow computation. We show that forgoing
assumption (c) and explicitly representing occlusions is not only conceptually correct, but also algorithmically advantageous, for the resulting optimization problem can be shown to become convex
once occlusions are explicitly modeled. Therefore, one can guarantee convergence to a globally
¹ Assumption (b) is also made for convenience, as modeling illumination changes would require modeling reflectance, which significantly complicates the picture.
² If the domain of an image portrays a portion of the scene that is not visible in another image, the two cannot be put into correspondence.
optimal solution regardless of initial conditions (sect. 2). We adapt Nesterov's efficient optimization
scheme to our problem (sect. 3), and test the resulting algorithm on benchmark datasets (sect. 4),
including evaluation of occlusion detection (sect. 1.2).
1.1 Related Work
The most common approach to handling occlusions in the optical flow literature is to define them as
regions where forward and backwards motion estimates are inconsistent [19, 1]. Most approaches
return estimates of motion in the occluded regions, where they cannot be invalidated: As we have
already pointed out, in an occluded region one cannot determine a motion field that maps one image
onto another, because the scene is not visible in one of the two. Some approaches [11, 4], while also
exploiting motion symmetry, discount occlusions by weighting the data fidelity with a monotonically
decreasing function. The resulting problem is non-convex, and therefore the proposed alternating
minimization techniques can be prone to local minima. An alternate approach [15, 14, 25] is to
formulate joint motion estimation and occlusion detection in a discrete setting, where it is NP-hard. Various approximate solutions using combinatorial optimization require fine quantization and,
therefore, suffer from a large number of labels which results in loose approximation bounds. Another
class of methods uses the motion estimation residual to classify a location as occluded or visible
either with a direct threshold on the residual [30] or with a more elaborate probabilistic model [24].
In each case, the resulting optimization is non-convex.
1.2 Evaluation
Optical flow estimation is a mature area of computer vision, and benchmark datasets have been developed, e.g., [2]. Unfortunately, no existing benchmark provides ground truth for occluded regions,
nor a scoring mechanism to evaluate occlusion detection performance. Motion estimates are scored
even in the occluded regions, where the data does not support them. Since our primary goal is to
detect occlusions, we have produced a new benchmark by taking a subset of the training data in the
Middlebury dataset, and hand-labeled occluded regions. We then use the same evaluation method
of the Middlebury for the (ground truth) regions that are co-visible in at least two images. This
provides a motion estimation score. Then, we provide a separate score for occlusion detection, in
terms of precision-recall curves.
2 Joint Occlusion Detection and Optical Flow Estimation
In this section, we show how the assumptions (a)-(b) can be used to formulate occlusion detection
and optical flow estimation as a joint optimization problem. Let I : D ⊂ R² × R⁺ → R⁺; (x, t) ↦ I(x, t) be a grayscale time-varying image defined on a domain D. Under the assumptions (a)-(b), the relation between two consecutive frames in a video {I(x, t)}_{t=0}^T is given by

I(x, t) = I(w(x, t), t + dt) + n(x, t)  for x ∈ D\Ω(t; dt),   I(x, t) = ρ(x, t)  for x ∈ Ω(t; dt),   (1)

where w : D × R⁺ → R²; x ↦ w(x, t) = x + v(x, t) is the domain deformation mapping I(x, t) onto I(x, t + dt) everywhere except at occluded regions. Usually optical flow denotes the incremental displacement v(x, t) ≐ w(x, t) − x. The occluded region Ω can change over time depending on the temporal sampling interval dt and is not necessarily simply-connected; so even if we call Ω the occluded region (singular), it is understood that it can be made of several disconnected portions. Inside Ω, the image can take any value ρ : Ω × R⁺ → R⁺ that is in general unrelated to I(w(x), t + dt)|_{x∈Ω}. In the limit dt → 0, Ω(t; dt) = ∅. Because of (almost-everywhere) continuity of the scene and its motion (i), and because the additive term n(x, t) compounds the effects of a large number of independent phenomena³ and therefore we can invoke the Law of Large Numbers (ii), in general we have that

(i) lim_{dt→0} Ω(t; dt) = ∅,  and  (ii) n ∼ N(0, λ) IID.   (2)
³ n(x, t) collects all unmodeled phenomena including deviations from Lambertian reflection, illumination changes, quantization error, sensor noise, and later also linearization error. It does not capture occlusions, since those are explicitly modeled.
i.e., the additive uncertainty is normally distributed in space and time with an isotropic and small variance λ > 0. We define the residual e : D → R on the entire image domain x ∈ D, via

e(x, t; dt) ≐ I(x, t) − I(w(x, t), t + dt) = n(x, t)  for x ∈ D\Ω,  and  = ρ(x, t) − I(w(x, t), t + dt)  for x ∈ Ω,   (3)

which we can write as the sum of two terms, e₁ : D → R and e₂ : D → R, also defined on the entire domain D in such a way that

e₁(x, t; dt) ≐ ρ(x, t) − I(w(x, t), t + dt),  x ∈ Ω;   e₂(x, t; dt) ≐ n(x, t),  x ∈ D\Ω.   (4)
x ? D\?.
Note that e2 is undefined in ?, and e1 is undefined in D\?, in the sense that they can take any value
there, including zero, which we will assume henceforth. We can then write, for any x ? D,
I(x, t) = I(w(x, t), t + dt) + e1 (x, t; dt) + e2 (x, t; dt)
4
(5)
4
and note that, because of (i) e1 is large but sparse, while because of (ii) e2 is small but dense . We
will use this as an inference criterion for w, seeking to optimize a data fidelity term that minimizes
the number of nonzero elements of e1 (a proxy of the area of ?), and the negative log-likelihood of
n.
ψ_data(w, e₁) ≐ ‖e₁‖_{L⁰(D)} + (1/λ)‖e₂‖_{L²(D)}  subject to (5)   (6)
  = (1/λ)‖I(x, t) − I(w(x, t), t + dt) − e₁‖_{L²(D)} + ‖e₁‖_{L⁰(D)}

where ‖f‖_{L⁰(D)} ≐ |{x ∈ D | f(x) ≠ 0}| and ‖f‖_{L²(D)} ≐ ∫_D |f(x)|² dx. Unfortunately, we do not know anything about e₁ other than the fact that it is sparse, and that what we are looking for is χ(Ω)·e₁, where χ : D → R⁺ is the characteristic function that is non-zero when x ∈ Ω, i.e., where the occlusion residual is non-zero. So, the data fidelity term depends on w but also on the characteristic function of the occlusion domain Ω.⁵ For a sufficiently small dt, we can approximate, for any x ∈ D\Ω,

I(x, t + dt) = I(x, t) + ∇I(x, t) v(x, t) + n(x, t),   (9)

where the linearization error has been incorporated into the uncertainty term n(x, t). Therefore, following the same previous steps, we have

ψ_data(v, e₁) = ‖∇I v + I_t − e₁‖_{L²(D)} + λ‖e₁‖_{L⁰(D)}.   (10)
Since we typically do not know the variance λ of the process n, we will treat it as a tuning parameter, and because ψ_data or λψ_data yield the same minimizer, we have attributed the multiplier λ to
the second term. In addition to the data term, because the unknown v is infinite-dimensional and
the problem is ill-posed, we need to impose regularization, for instance by requiring that the total
variation (TV) be small
ψ_reg(v) = μ‖v₁‖_{TV} + μ‖v₂‖_{TV},   (11)

where v₁ and v₂ are the first and second components of the optical flow v, μ is a multiplier factor to
weight the strength of the regularizer, and the weighted isotropic TV norm is defined by

‖f‖_{TV(D)} = ∫_D sqrt[ (g₁(x) ∂_x f(x))² + (g₂(x) ∂_y f(x))² ] dx,
⁴ Sparse stands for almost everywhere zero on D. Similarly, dense stands for almost everywhere non-zero.
⁵ In a digital image, both domains D and Ω are discretized into a lattice, and dt is fixed. Therefore, spatial and temporal derivative operators are approximated, typically, by first-order differences. We use the formal notation

∇I(x, t) ≐ [ I(x + [1, 0]ᵀ, t) − I(x, t),  I(x + [0, 1]ᵀ, t) − I(x, t) ],   (7)
I_t(x, t) ≐ I(x, t + dt) − I(x, t).   (8)
where g₁(x) ≐ exp(−η|∂_x I(x)|) and g₂(x) ≐ exp(−η|∂_y I(x)|); η is a normalizing factor. TV is desirable in the context of occlusion detection because it does not penalize motion discontinuities significantly. The overall problem can then be written as the minimization of the cost functional ψ = ψ_data + ψ_reg, which is

v̂₁, v̂₂, ê₁ = arg min_{v₁,v₂,e₁} ‖∇I v + I_t − e₁‖²_{L²(D)} + λ‖e₁‖_{L⁰(D)} + μ‖v₁‖_{TV(D)} + μ‖v₂‖_{TV(D)},   (12)

where the objective is denoted ψ(v₁, v₂, e₁).
In a digital image, the domain D is quantized into an M ? N lattice ?, so we can write (12) in
matrix form as:
  v̂1, v̂2, ê1 = arg min_{v1,v2,e1} (1/2)‖A[v1, v2, e1]^T + b‖²ℓ2 + λ‖e1‖ℓ0 + μ‖v1‖TV + μ‖v2‖TV        (13)
where e1 ∈ R^{MN} is the vector obtained by stacking the values of e1(x, t) on the lattice Λ on
top of one another (column-wise), and similarly the vector field components {v1(x, t)}_{x∈Λ}
and {v2(x, t)}_{x∈Λ} are stacked into MN-dimensional vectors v1, v2 ∈ R^{MN}. The spatial
derivative matrix A is given by A = [diag(∇x I)  diag(∇y I)  −I], where I is the MN × MN
identity matrix, and the temporal derivative values {I_t(x, t)}_{x∈Λ} are stacked into b. For
finite-dimensional vectors u ∈ R^{MN}, ‖u‖ℓ2 = sqrt(⟨u, u⟩), ‖u‖ℓ0 = |{u_i | u_i ≠ 0}| and
‖u‖TV = Σ_i sqrt( ((g1)_i (u_{i+1} − u_i))² + ((g2)_i (u_{i+M} − u_i))² ), where g1 and g2 are
the stacked versions of {g1(x)}_{x∈Λ} and {g2(x)}_{x∈Λ}.
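To make the discrete definitions concrete, here is a minimal NumPy sketch of the weighted TV norm
above. It is illustrative only: the function name, the zero differences at the boundary, and the
column-wise (Fortran-order) stacking are our assumptions, chosen to match the indexing u_{i+1}
(vertical neighbor) and u_{i+M} (horizontal neighbor) in the text.

```python
import numpy as np

def weighted_tv_norm(u, g1, g2, M, N):
    # u, g1, g2 are MN-vectors obtained by stacking an M x N lattice
    # column-wise, so index i+1 is the vertical neighbor of i and
    # index i+M the horizontal one (interior points).
    U  = u.reshape((M, N), order="F")
    G1 = g1.reshape((M, N), order="F")
    G2 = g2.reshape((M, N), order="F")
    dv = np.zeros_like(U)
    dh = np.zeros_like(U)
    dv[:-1, :] = np.diff(U, axis=0)   # u_{i+1} - u_i
    dh[:, :-1] = np.diff(U, axis=1)   # u_{i+M} - u_i
    return float(np.sum(np.sqrt((G1 * dv) ** 2 + (G2 * dh) ** 2)))
```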
In practice, (13) is NP-hard. Therefore, as customary, we relax it by minimizing the weighted-ℓ1
norm of e1, instead of ℓ0, such that
  v̂1, v̂2, ê1 = arg min_{v1,v2,e1} (1/2)‖A[v1, v2, e1]^T + b‖²ℓ2 + λ‖W e1‖ℓ1 + μ‖v1‖TV + μ‖v2‖TV        (14)
where W is a diagonal weight matrix and ‖u‖ℓ1 = Σ_i |u_i|. When W is the identity, (14) becomes a
standard convex relaxation of (13) and its globally optimal solution can be reached efficiently [27].
However, the ℓ0 norm can also be approximated by reweighting ℓ1, as proposed by Candes et al. [5],
by setting the diagonal elements of W to w_i ∝ 1/(|(e1)_i| + ε), with ε small, after each iteration
of (14). The data term of the standard (unweighted) relaxation of (13) can be interpreted as a Huber
norm [10]. We favor the more general (14), as the resulting estimate of e1 is more stable and sparse.
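A minimal sketch of this outer reweighting loop, under stated assumptions: solve_weighted_problem
is a hypothetical placeholder for any solver of (14) at fixed W, and the number of outer iterations
and ε are illustrative constants, not values prescribed by the paper.

```python
import numpy as np

def reweighted_l1(solve_weighted_problem, n, n_outer=4, eps=1e-3):
    # First pass uses W = identity (the standard convex relaxation);
    # each subsequent pass reweights w_i ~ 1/(|(e1)_i| + eps), as in [5].
    w = np.ones(n)
    e1 = None
    for _ in range(n_outer):
        e1 = solve_weighted_problem(w)
        w = 1.0 / (np.abs(e1) + eps)
    return e1
```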
The model (9) is valid to the extent in which dt is sufficiently small relative to v (or v sufficiently
slow relative to dt), so the linearization error does not alter the statistics of the residual n. When this
is not the case, remedies must be enacted to restore proper sampling conditions [22] and therefore
differentiate contributions to the residual coming from sampling artifacts (aliasing), rather than occlusions. This can be done by solving (14) in scale-space, as customary, with coarser scales used to
initialize v̂1, v̂2 so the increment is properly sampled, and the occlusion term e1 added at the finest
scale.
The residual term e1 in (5) has been characterized in some literature as modeling illumination
changes [21, 16, 26, 13]. Note that, even if the model (5) appears similar, the priors on e1 are rather
different: sparsity in our case, smoothness in theirs. While sparsity is clearly motivated by (i), for
illumination changes to be properly modeled, a reflectance function is necessary, which is absent in
all models of the form (5) (see [23]).
3 Optimization with Nesterov's Algorithm

In this section, we describe an efficient algorithm to solve (14) based on Nesterov's first-order
scheme [17], which provides O(1/k²) convergence in k iterations, whereas standard gradient descent
achieves O(1/k), a considerable advantage for a large-scale problem such as (14). To simplify the
notation, we let (e1)_i ≐ w_i(e1)_i, so that A ≐ [diag(∇x I)  diag(∇y I)  −W^{−1}]. We then have
Initialize v1^0, v2^0, e1^0. For k ≥ 0:
1. Compute ∇ψ(v1^k, v2^k, e1^k).
2. Compute αk = (k + 1)/2 and τk = 2/(k + 3).
3. Compute yk = [v1^k, v2^k, e1^k]^T − (1/L)∇ψ(v1^k, v2^k, e1^k).
4. Compute zk = [v1^0, v2^0, e1^0]^T − (1/L) Σ_{i=0}^{k} αi ∇ψ(v1^i, v2^i, e1^i).
5. Update [v1^{k+1}, v2^{k+1}, e1^{k+1}]^T = τk zk + (1 − τk) yk.
Stop when the solution converges.
In order to implement this scheme, we need to address the nonsmooth nature of ℓ1 in the computation
of ∇ψ [18], a common problem in sparse optimization [3]. We write ψ(v1, v2, e1) as

  ψ(v1, v2, e1) = ψ1(v1, v2, e1) + λψ2(e1) + μψ3(v1) + μψ4(v2),

and compute the gradient of each term separately. ∇_{v1,v2,e1} ψ1(v1, v2, e1) is straightforward:

  ∇_{v1,v2,e1} ψ1(v1, v2, e1) = A^T A [v1, v2, e1]^T + A^T b.
The other three terms require smoothing. ψ2(e1) = ‖e1‖ℓ1 can be rewritten as
ψ2(e1) = max_{‖u‖∞ ≤ 1} ⟨u, e1⟩ in terms of its conjugate. [18] proposes the smooth approximation

  ψ2σ(e1) = max_{‖u‖∞ ≤ 1} ⟨u, e1⟩ − (σ/2)‖u‖²ℓ2,        (15)

and shows that (15) is differentiable, with ∇_{e1} ψ2σ(e1) = u*, where u* is the solution of (15):

  u*_i = σ^{−1}(e1)_i   if |(e1)_i| < σ,
         sgn((e1)_i)    otherwise.        (16)
Following [3], ∇_{v1} ψ3 is given by ∇_{v1} ψ3σ(v1) = G^T u*, where G = [G1, G2]^T, G1 and G2 are
weighted horizontal and vertical differentiation operators, and u* has the form [u1, u2], where

  u^{1,2}_i = σ^{−1}(G_{1,2} v1)_i                                     if ‖[(G1 v1)_i  (G2 v1)_i]^T‖ℓ2 < σ,
              ‖[(G1 v1)_i  (G2 v1)_i]^T‖^{−1}_{ℓ2} (G_{1,2} v1)_i      otherwise.        (17)
∇_{v2} ψ4 can be computed in the same way. Once we have computed each term, ∇ψ(v1, v2, e1) is

  ∇ψ(v1, v2, e1) = ∇ψ1 + [λ∇_{e1} ψ2, μ∇_{v1} ψ3, μ∇_{v2} ψ4]^T.        (18)
We also need the Lipschitz constant L to compute the auxiliary variables yk and zk that minimize ψ.
Since ‖G^T G‖2 is bounded above by 8 [7], given the coefficients λ and μ, L is given by

  L = max(λ, 8μ)/σ + ‖A^T A‖2.

A crucial element of the scheme is the selection of σ: it trades off accuracy against speed of
convergence. A large σ yields a smooth solution, which is undesirable when minimizing the ℓ1 norm;
a small σ causes slow convergence. We have chosen σ empirically, although the continuation
algorithm proposed in [3] could be employed to adapt σ during convergence.
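For concreteness, the scheme can be sketched as follows. This is an illustrative skeleton under our
own naming, not the authors' Matlab/C++ implementation: grad_psi is assumed to assemble the full
smoothed gradient (18), and smoothed_l1_grad transcribes (16).

```python
import numpy as np

def smoothed_l1_grad(e1, sigma):
    # u* of Eq. (16): e1_i / sigma inside [-sigma, sigma], sign(e1_i) outside
    return np.clip(e1 / sigma, -1.0, 1.0)

def nesterov(grad_psi, x0, L, n_iter=300):
    # x stacks [v1, v2, e1]; L = max(lambda, 8*mu)/sigma + ||A^T A||_2
    x = x0.copy()
    g_acc = np.zeros_like(x0)               # running sum of alpha_i * grad_i
    for k in range(n_iter):
        g = grad_psi(x)
        alpha_k = 0.5 * (k + 1)
        tau_k = 2.0 / (k + 3)
        y = x - g / L                       # step 3
        g_acc += alpha_k * g
        z = x0 - g_acc / L                  # step 4
        x = tau_k * z + (1.0 - tau_k) * y   # step 5
    return x
```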
4 Experiments
To evaluate occlusion detection (Sect. 1.2), we start from [2] and generate occlusion maps as follows: for each training sequence, the residual computed from the given ground truth motion is used
as a discriminant to determine ground truth occlusions, fixing obvious errors in the occlusion maps
by hand. We therefore restrict the evaluation of motion to the co-visible regions, and evaluate occlusion detection as a standard binary classification task. We compare our algorithm to [29] and
[14]; the former is an example of robust motion estimation and the latter is representative of the
approaches described in Sect. 1.1.
In our implementation (source code available at
http://vision.ucla.edu/~ayvaci/occlusion-detection/), we first solve (14) with standard relaxation
(W is the identity) and then with reweighted-ℓ1. To handle large motion, we use a pyramid with scale
factor 0.5 and up to 4 levels; λ and μ are fixed at 0.002 and 0.001 (Flower Garden) and 0.0006 and
0.0003 (Middlebury), respectively. To make the comparison with [29] fair, we modify the code
provided online (http://gpu4vision.icg.tugraz.at) to include anisotropic regularization (Fig. 1).
Note that no occlusion is present in the residual of the motion field computed by TV-L1, and
subsequently the motion estimates are less precise around occluding boundaries (top-left corner of
the Flower Garden, plane on the left in Venus).
Figure 1: Comparison with TV-L1 [29] on 'Venus' from [2] and 'Flower Garden.' The first column
shows the motion estimates by TV-L1, color-coded as in [29]; the second, its residual
I(x, t) − I(w(x), t + dt); the third shows our motion estimates, and the fourth our residual e1
defined in (14).
Other frames of the Flower Garden sequence are shown in Fig. 2, where we have regularized the
occluded region by minimizing a unilateral energy on e1 with graph-cuts.

Figure 2: Motion estimates for more frames of the Flower Garden sequence (left), residual e1
(middle), and occluded region (right).

We have also compared
motion estimates obtained with our method and [29] in the co-visible regions for the Middlebury
dataset (Table 1). Since occlusions can only be determined at the finest scale absent proper sampling conditions, in this experiment we minimize the same functional of [29] at coarse scales, and
switch to (14) at the finest scale. To evaluate occlusion detection performance, we again use the
Middlebury dataset, and compare e1 to ground-truth occlusions using precision/recall curves (Fig. 3)
and average precision values (Table 2). We also show, in Table 2, the improvement in detection
performance when we use reweighted-ℓ1. We have compared our occlusion detection results to [14],
using the code provided online by the authors (Table 3). Comparing motion estimates gives an unfair
              Venus  RubberWhale  Hydrangea  Grove2  Grove3  Urban2  Urban3
AAE (ours)     4.37     5.42         2.35      2.32    5.72    3.60    6.41
AAE (L1TV)     5.28     4.49         2.44      3.45    7.66    3.57    7.12
AEPE (ours)    0.30     0.18         0.19      0.16    0.59    0.39    0.84
AEPE (L1TV)    0.33     0.13         0.20      0.24    0.74    0.46    0.89

Table 1: Quantitative comparison of our algorithm with TV-L1 [29]. Average Angular Error (AAE)
and Average End Point Error (AEPE) of motion estimates in co-visible regions.
[Figure 3 plot panels omitted: precision-recall curves (precision vs. recall, both 0 to 1) for
Venus, RubberWhale, Hydrangea, Grove2, Grove3, and Urban2.]

Figure 3: Left to right: Representative samples of motion estimates from the Middlebury dataset,
labeled ground-truth occlusions, error term estimate e1, and precision-recall curves for our
occlusion detection.
advantage to our algorithm because their approach is based on quantized disparity values, yielding
lower accuracy.
               Venus  RubberWhale  Hydrangea  Grove2  Grove3  Urban2  Urban3
ℓ1              0.67     0.48         0.55      0.70    0.60    0.72    0.80
reweighted-ℓ1   0.69     0.49         0.57      0.70    0.61    0.73    0.80

Table 2: Average precision of our approach on Middlebury data with and without re-weighting.
It takes 186 seconds for a Matlab/C++ implementation of Nesterov's algorithm to converge to a
solution on a 288 × 352 frame from the Flower Garden sequence. We have also compared Nesterov's
algorithm to the split-Bregman method [9] for minimization of (14) in terms of convergence speed,
and reported the results in [20].
                  Venus  RubberWhale  Hydrangea  Grove2  Grove3  Urban2  Urban3
Precision [14]     0.61     0.46         0.68      0.72    0.79    0.26    0.56
Recall [14]        0.66     0.20         0.20      0.55    0.45    0.50    0.51
Precision (ours)   0.69     0.91         0.96      0.96    0.86    0.95    0.94

Table 3: Comparison with [14] on Middlebury. Since Kolmogorov et al. provide a binary output,
we display our precision at their same recall value.
5 Discussion
We have presented an algorithm to detect occlusions and establish correspondence between two
images. It leverages a formulation that, starting from standard assumptions (Lambertian reflection,
constant diffuse illumination), arrives at a convex optimization problem. Our approach does not
assume a rigid scene, nor a single moving object. It also does not assume that the occluded region
is simply connected: occlusions in natural scenes can be very complex (see Fig. 3) and should
therefore, in general, not be spatially regularized. The fact that occlusion detection reduces to a
two-phase segmentation of the domain into either occluded (Ω) or visible (D\Ω) should not confuse
the reader familiar with the image segmentation literature, whereby two-phase segmentation of one
object (foreground) from the background can be posed as a convex optimization problem [6] but
breaks down in the presence of multiple objects, or 'phases.' Note that in [6] the problem can be
made convex only in e1, but not jointly in e1 and v. We focus on inter-frame occlusion detection;
temporal consistency of occlusion 'layers' was addressed in [12].
The limitations of our approach lie mostly in its dependence on the regularization coefficients
λ and μ. In the absence of some estimate of the variance coefficient λ, one is left with tuning it by
trial-and-error. Similarly, μ is a parameter that, like in any classification problem, trades off missed
detections and false alarms, and therefore no single value is 'optimal' in any meaningful sense.
These limitations are shared by most variational optical flow estimation algorithms.
Acknowledgement: This work was supported by AFOSR FA9550-09-1-0427, ARO 56765-CI, and
ONR N00014-08-1-0414.
References
[1] L. Alvarez, R. Deriche, T. Papadopoulo, and J. Sánchez. Symmetrical dense optical flow estimation with occlusions detection. International Journal of Computer Vision, 75(3):371-385, 2007.
[2] S. Baker, D. Scharstein, J. Lewis, S. Roth, M. Black, and R. Szeliski. A database and evaluation methodology for optical flow. In Proceedings of the International Conference on Computer Vision, volume 5, 2007.
[3] S. Becker, J. Bobin, and E. Candes. NESTA: A fast and accurate first-order method for sparse recovery. arXiv preprint, 2009.
[4] R. Ben-Ari and N. Sochen. Variational stereo vision with sharp discontinuities and occlusion handling. In ICCV, pages 1-7. IEEE Computer Society, 2007.
[5] E. Candes, M. Wakin, and S. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications, 14(5):877-905, 2008.
[6] T. Chan, S. Esedoglu, and M. Nikolova. Algorithms for finding global minimizers of denoising and segmentation models. SIAM J. Appl. Math, 66:1632-1648, 2006.
[7] J. Dahl, P. Hansen, S. Jensen, and T. Jensen. Algorithms and software for total variation image reconstruction via first-order methods. Numerical Algorithms, pages 67-92, 2009.
[8] J. J. Gibson. The Ecological Approach to Visual Perception. LEA, 1984.
[9] T. Goldstein and S. Osher. The split Bregman method for L1 regularized problems. SIAM Journal on Imaging Sciences, 2(2):323-343, 2009.
[10] P. Huber and E. Ronchetti. Robust Statistics. John Wiley & Sons Inc, 2009.
[11] S. Ince and J. Konrad. Occlusion-aware optical flow estimation. IEEE Transactions on Image Processing, 17(8):1443-1451, 2008.
[12] J. Jackson, A. J. Yezzi, and S. Soatto. Dynamic shape and appearance modeling via moving and deforming layers. International Journal of Computer Vision, 2008.
[13] Y. Kim, A. Martínez, and A. Kak. Robust motion estimation under varying illumination. Image and Vision Computing, 23(4):365-375, 2005.
[14] V. Kolmogorov and R. Zabih. Computing visual correspondence with occlusions via graph cuts. In International Conference on Computer Vision, volume 2, pages 508-515, 2001.
[15] K. Lim, A. Das, and M. Chong. Estimation of occlusion and dense motion fields in a bidirectional Bayesian framework. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 712-718, 2002.
[16] S. Negahdaripour. Revised definition of optical flow: Integration of radiometric and geometric cues for dynamic scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 961-979, 1998.
[17] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). In Doklady AN SSSR, volume 269, pages 543-547, 1983.
[18] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.
[19] M. Proesmans, L. Van Gool, and A. Oosterlinck. Determination of optical flow and its discontinuities using a non-linear diffusion. In European Conference on Computer Vision, 1994.
[20] M. Raptis, A. Ayvaci, and S. Soatto. Occlusion detection and motion estimation via convex optimization. Technical report, UCLA CAM 10-36, June 2010.
[21] D. Shulman and J. Herve. Regularization of discontinuous flow fields. In Proc. of Workshop on Visual Motion, pages 81-86, 1989.
[22] S. Soatto. Steps towards a theory of visual information. Technical report, UCLA-CSD100028, September 2010.
[23] S. Soatto, A. J. Yezzi, and H. Jin. Tales of shape and radiance in multiview stereo. In Intl. Conf. on Comp. Vision, pages 974-981, October 2003.
[24] C. Strecha, R. Fransens, and L. Van Gool. A probabilistic approach to large displacement optical flow and occlusion detection. In ECCV Workshop SMVP, pages 71-82. Springer, 2004.
[25] J. Sun, Y. Li, S. Kang, and H. Shum. Symmetric stereo matching for occlusion handling. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, page 399, 2005.
[26] C. Teng, S. Lai, Y. Chen, and W. Hsu. Accurate optical flow computation under non-uniform brightness variations. Computer Vision and Image Understanding, 97(3):315-346, 2005.
[27] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267-288, 1996.
[28] A. Verri and T. Poggio. Motion field and optical flow: Qualitative properties. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(5):490-498, 1989.
[29] A. Wedel, T. Pock, C. Zach, H. Bischof, and D. Cremers. An improved algorithm for TV-L1 optical flow. In Statistical and Geometrical Approaches to Visual Motion Analysis: International Dagstuhl Seminar. Springer, 2009.
[30] J. Xiao, H. Cheng, H. Sawhney, C. Rao, M. Isnardi, et al. Bilateral filtering-based optical flow estimation with occlusion detection. Lecture Notes in Computer Science, 3951:211, 2006.
b-Bit Minwise Hashing for Estimating Three-Way Similarities
Ping Li
Dept. of Statistical Science
Cornell University
Arnd Christian König
Microsoft Research
Microsoft Corporation
Wenhao Gui
Dept. of Statistical Science
Cornell University
Abstract
Computing¹ two-way and multi-way set similarities is a fundamental problem.
This study focuses on estimating 3-way resemblance (Jaccard similarity) using
b-bit minwise hashing. While traditional minwise hashing methods store each
hashed value using 64 bits, b-bit minwise hashing only stores the lowest b bits
(where b ≥ 2 for 3-way). The extension to 3-way similarity from the prior work
on 2-way similarity is technically non-trivial. We develop the precise estimator,
which is accurate but rather complicated; and we recommend a much simplified
estimator suitable for sparse data. Our analysis shows that b-bit minwise hashing
can normally achieve a 10 to 25-fold improvement in the storage space required
for a given estimator accuracy of the 3-way resemblance.
1 Introduction
The efficient computation of the similarity (or overlap) between sets is a central operation in a variety
of applications, such as word associations (e.g., [13]), data cleaning (e.g., [40, 9]), data mining
(e.g., [14]), selectivity estimation (e.g., [30]) or duplicate document detection [3, 4]. In machine
learning applications, binary (0/1) vectors can be naturally viewed as sets. For scenarios where the
underlying data size is sufficiently large to make storing them (in main memory) or processing them
in their entirety impractical, probabilistic techniques have been proposed for this task.
Word associations (collocations, co-occurrences)
If one enters the query 'NIPS machine learning',
all major search engines will report the number of pagehits (e.g., one reports 829,003), in addition to
the top-ranked URLs. Although no search engines have revealed how they estimate the numbers of
pagehits, one natural approach is to treat this as a set intersection estimation problem. Each word can
be represented as a set of document IDs; and each set belongs to a very large space Ω. It is expected
that |Ω| > 10^10. Word associations have many other applications in Computational Linguistics [13,
38], and were recently used for Web search query reformulation and query suggestions [42, 12].
Here is another example. Commercial search engines display various forms of 'vertical' content
(e.g., images, news, products) as part of Web search. In order to determine from which 'vertical'
to display information, there exist various techniques to select verticals. Some of these (e.g., [29,
15]) use the number of documents the words in a search query occur in for different text corpora
representing various verticals as features. Because this selection is invoked for all search queries
(and search imposes tight latency bounds), the computation of these features has to be very fast.
Moreover, the accuracy of vertical selection depends on the number/size of document corpora that
can be processed within the allotted time [29], i.e., the processing speed can directly impact quality.
Now, because of the large number of word combinations in even medium-sized text corpora (e.g.,
the Wikipedia corpus contains > 10^7 distinct terms), it is impossible to pre-compute and store the
associations for all possible multi-term combinations (e.g., > 10^14 for 2-way and > 10^21 for 3-way);
instead, the techniques described in this paper can be used for fast estimates of the co-occurrences.
Database query optimization Set intersection is a routine operation in databases, employed for
example during the evaluation of conjunctive selection conditions in the presence of single-column
indexes. Before conducting intersections, a critical task is to (quickly) estimate the sizes of the
intermediate results to plan the optimal intersection order [20, 8, 25]. For example, consider the task
of intersecting four sets of record identifiers: A ? B ? C ? D. Even though the final outcome will
be the same, the order of the join operations, e.g., (A ? B) ? (C ? D) or ((A ? B) ? C) ? D, can
significantly affect the performance, in particular if the intermediate results, e.g., A?B ?C, become
too large for main memory and need to be spilled to disk. A good query plan aims to minimize
¹ This work is supported by NSF (DMS-0808864), ONR (YIP-N000140910911) and Microsoft.
the total size of intermediate results. Thus, it is highly desirable to have a mechanism which can
estimate join sizes very efficiently, especially for the lower-order (2-way and 3-way) intersections,
which could potentially result in much larger intermediate results than higher-order intersections.
Duplicate Detection in Data Cleaning: A common task in data cleaning is the identification of
duplicates (e.g., duplicate names, organizations, etc.) among a set of items. Now, despite the fact
that there is considerable evidence (e.g., [10]) that reliable duplicate-detection should be based on
local properties of groups of duplicates, most current approaches base their decisions on pairwise
similarities between items only. This is in part due to the computational overhead associated with
more complex interactions, which our approach may help to overcome.
Clustering Most clustering techniques are based on pair-wise distances between the items to be
clustered. However, there are a number of natural scenarios where the affinity relations are not
pairwise, but rather triadic, tetradic or higher (e.g. [1, 43]). Again, our approach may improve the
performance in these scenarios if the distance measures can be expressed in the form of set-overlap.
Data mining A lot of work in data mining has focused on efficient candidate pruning in the
context of pairwise associations (e.g., [14]), a number of such pruning techniques leverage minwise
hashing to prune pairs of items, but in many contexts (e.g., association rules with more than 2 items)
multi-way associations are relevant; here, pruning based on pairwise interactions may perform much
less well than multi-way pruning.
1.1 Ultra-high dimensional data are often binary
For duplicate detection in the context of Web crawling/search, each document can be represented as
a set of w-shingles (w contiguous words); w = 5 or 7 in several studies [3, 4, 17]. Normally only the
absence/presence (0/1) information is used, as a w-shingle rarely occurs more than once in a page
if w ≥ 5. The total number of shingles is commonly set to be |Ω| = 2^64; and thus the set intersection
corresponds to computing the inner product in binary data vectors of 2^64 dimensions. Interestingly,
even when the data are not too high-dimensional (e.g., only thousands), empirical studies [6, 23, 26]
achieved good performance using SVM with binary-quantized (text or image) data.
1.2 Minwise Hashing and SimHash
Two of the most widely adopted approaches for estimating set intersections are minwise hashing [3,
4] and sign (1-bit) random projections (also known as simhash) [7, 34], which are both special
instances of the general techniques proposed in the context of locality-sensitive hashing [7, 24].
These techniques have been successfully applied to many tasks in machine learning, databases, data
mining, and information retrieval [18, 36, 11, 22, 16, 39, 28, 41, 27, 5, 2, 37, 7, 24, 21].
Limitations of random projections The method of random projections (including simhash) is
limited to estimating pairwise similarities. Random projections convert any data distributions to
(zero-mean) multivariate normals, whose density functions are determined by the covariance matrix
which contains only the pairwise information of the original data. This is a serious limitation.
1.3 Prior work on b-Bit Minwise Hashing
Instead of storing each hashed value using 64 bits as in prior studies, e.g., [17], [35] suggested to
store only the lowest b bits. [35] demonstrated that using b = 1 reduces the storage space at least
by a factor of 21.3 (for a given accuracy) compared to b = 64, if one is interested in resemblance
≥ 0.5, the threshold used in prior studies [3, 4]. Moreover, by choosing the value b of bits to be
retained, it becomes possible to systematically adjust the degree to which the estimator is 'tuned'
towards higher similarities, as well as the amount of hashing (random permutations) required.
[35] concerned only the pairwise resemblance. To extend it to the multi-way case, we have to solve
new and challenging probability problems. Compared to the pairwise case, our new estimator is
significantly different. In fact, as we will show later, estimating 3-way resemblance requires b ≥ 2.
1.4 Notation
[Figure 1 omitted: two Venn diagrams, one of two sets (sizes f1, f2, intersection a12) and one of
three sets (relative sizes r1, r2, r3, pairwise intersections s12, s13, s23, triple intersection s).]

Figure 1: Notation for 2-way and 3-way set intersections.
Fig. 1 describes the notation used in 3-way intersections for three sets S1, S2, S3 ⊆ Ω, |Ω| = D.

• f1 = |S1|, f2 = |S2|, f3 = |S3|.
• a12 = |S1 ∩ S2|, a13 = |S1 ∩ S3|, a23 = |S2 ∩ S3|, a = a123 = |S1 ∩ S2 ∩ S3|.
• r1 = f1/D, r2 = f2/D, r3 = f3/D; s12 = a12/D, s13 = a13/D, s23 = a23/D, s = s123 = a/D.
• u = r1 + r2 + r3 − s12 − s13 − s23 + s.
We define three 2-way resemblances (R12, R13, R23) and one 3-way resemblance (R) as:

  R12 = |S1 ∩ S2| / |S1 ∪ S2|,   R13 = |S1 ∩ S3| / |S1 ∪ S3|,   R23 = |S2 ∩ S3| / |S2 ∪ S3|,
  R = R123 = |S1 ∩ S2 ∩ S3| / |S1 ∪ S2 ∪ S3|,        (1)
which, using our notation, can be expressed in various forms:

  Rij = aij / (fi + fj − aij) = sij / (ri + rj − sij),   i ≠ j,        (2)

  R = a / (f1 + f2 + f3 − a12 − a13 − a23 + a) = s / (r1 + r2 + r3 − s12 − s13 − s23 + s) = s/u.        (3)
Note that, instead of a123, s123, R123, we simply use a, s, R. When the set sizes fi = |Si| can be
assumed to be known, we can compute resemblances from intersections and vice versa:

  aij = Rij/(1 + Rij) · (fi + fj),   a = R/(1 − R) · (f1 + f2 + f3 − a12 − a13 − a23).
Thus, estimating resemblances and estimating intersection sizes are two closely related problems.
1.5 Our Main Contributions
• We derive the basic probability formula for estimating 3-way resemblance using b-bit hashing.
The derivation turns out to be significantly more complex than in the 2-way case. This basic
probability formula naturally leads to a (complicated) estimator of resemblance.
• We leverage the observation that many real applications involve sparse data (i.e., ri = fi/D ≈ 0,
while fi/fj = ri/rj may still be significant) to develop a much simplified estimator, which is
desired in practical applications. This assumption of fi/D → 0 significantly simplifies the
estimator and frees us from having to know the cardinalities fi.
• We analyze the theoretical variance of the simplified estimator and compare it with the original
minwise hashing method (using 64 bits). Our theoretical analysis shows that b-bit minwise hashing
can normally achieve a 10 to 25-fold improvement in storage space (for a given estimator accuracy
of the 3-way resemblance) when the set similarities are not extremely low (e.g., when the 3-way
resemblance > 0.02). These results are particularly important for applications in which only
detecting high resemblance/overlap is relevant, such as many data cleaning scenarios or duplicate
detection.
The recommended procedure for estimating 3-way resemblances (in sparse data) is shown as Alg. 1.
Algorithm 1 The b-bit minwise hashing algorithm, applied to estimating 3-way resemblances in a
collection of N sets. This procedure is suitable for sparse data, i.e., ri = fi/D ≈ 0.

Input: Sets Sn ⊆ Ω = {0, 1, ..., D − 1}, n = 1 to N.

Pre-processing phase:
1) Generate k random permutations πj : Ω → Ω, j = 1 to k.
2) For each set Sn and permutation πj, store the lowest b bits of min(πj(Sn)), denoted by
e_{n,t,πj}, t = 1 to b.

Estimation phase: (Use three sets S1, S2, and S3 as an example.)
1) Compute P̂12,b = (1/k) Σ_{j=1}^{k} ∏_{t=1}^{b} 1{e_{1,t,πj} = e_{2,t,πj}}. Similarly, compute
P̂13,b and P̂23,b.
2) Compute P̂b = (1/k) Σ_{j=1}^{k} ∏_{t=1}^{b} 1{e_{1,t,πj} = e_{2,t,πj} = e_{3,t,πj}}.
3) Estimate R by R̂b = [4^b P̂b − 2^b (P̂12,b + P̂13,b + P̂23,b) + 2] / [(2^b − 1)(2^b − 2)].
4) If needed, the 2-way resemblances Rij can be estimated as R̂ij,b = (2^b P̂ij,b − 1)/(2^b − 1).
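A small Python sketch of Alg. 1, for illustration only (names are ours). Explicit random
permutations stand in for the πj, which is fine for modest D; at D = 2^64 one would use hashing
instead. Note that step 3) requires b ≥ 2.

```python
import numpy as np

def bbit_sketches(sets, D, k, b, seed=0):
    # Pre-processing phase of Alg. 1: for each set and each of k
    # permutations, keep the lowest b bits of the minimum hashed value.
    rng = np.random.RandomState(seed)
    perms = [rng.permutation(D) for _ in range(k)]
    sk = np.empty((len(sets), k), dtype=np.int64)
    for n, S in enumerate(sets):
        idx = np.fromiter(S, dtype=np.int64)
        for j, p in enumerate(perms):
            sk[n, j] = p[idx].min() & ((1 << b) - 1)
    return sk

def estimate_R3(sk1, sk2, sk3, b):
    # Estimation phase (sparse data): matching all b stored bits at once
    # equals the product of the per-bit indicators in steps 1) and 2).
    P12 = np.mean(sk1 == sk2)
    P13 = np.mean(sk1 == sk3)
    P23 = np.mean(sk2 == sk3)
    Pb = np.mean((sk1 == sk2) & (sk2 == sk3))
    R = (4**b * Pb - 2**b * (P12 + P13 + P23) + 2) / ((2**b - 1) * (2**b - 2))
    R12 = (2**b * P12 - 1) / (2**b - 1)    # 2-way estimate, step 4)
    return R, R12
```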
2 The Precise Theoretical Probability Analysis
Minwise hashing applies k random permutations πj : Ω → Ω, Ω = {0, 1, ..., D − 1}, and then
estimates R12 (and similarly other 2-way resemblances) using the following probability:

  Pr( min(πj(S1)) = min(πj(S2)) ) = |S1 ∩ S2| / |S1 ∪ S2| = R12.        (4)

This method naturally extends to estimating 3-way resemblances for three sets S1, S2, S3 ⊆ Ω:

  Pr( min(πj(S1)) = min(πj(S2)) = min(πj(S3)) ) = |S1 ∩ S2 ∩ S3| / |S1 ∪ S2 ∪ S3| = R.        (5)
To describe b-bit hashing, we define the minimum values under π and their lowest b bits to be:

  zi = min(π(Si)),   ei,t = t-th lowest bit of zi.

To estimate R, we need to compute the empirical estimates of the probabilities Pij,b and Pb, where

  Pij,b = Pr( ∏_{t=1}^{b} 1{ei,t = ej,t} = 1 ),   Pb = P123,b = Pr( ∏_{t=1}^{b} 1{e1,t = e2,t = e3,t} = 1 ).

The main theoretical task is to derive Pb. The prior work [35] already derived Pij,b; see Appendix A.
To simplify the algebra, we assume that D is large, which is virtually always satisfied in practice.
Theorem 1 Assume D is large. Then

  Pb = Pr( ∏_{i=1}^{b} 1{e1,i = e2,i = e3,i} = 1 ) = Z/u + R = (Z + s)/u,        (6)

where u = r1 + r2 + r3 − s12 − s13 − s23 + s, and

  Z = (s12 − s) A3,b + [(r3 − s13 − s23 + s)/(r1 + r2 − s12)] s12 G12,b
    + (s13 − s) A2,b + [(r2 − s12 − s23 + s)/(r1 + r3 − s13)] s13 G13,b
    + (s23 − s) A1,b + [(r1 − s12 − s13 + s)/(r2 + r3 − s23)] s23 G23,b
    + [(r2 − s23) A3,b + (r3 − s23) A2,b] [(r1 − s12 − s13 + s)/(r2 + r3 − s23)] G23,b
    + [(r1 − s13) A3,b + (r3 − s13) A1,b] [(r2 − s12 − s23 + s)/(r1 + r3 − s13)] G13,b
    + [(r1 − s12) A2,b + (r2 − s12) A1,b] [(r3 − s13 − s23 + s)/(r1 + r2 − s12)] G12,b,

  Aj,b = rj (1 − rj)^{2^b − 1} / [1 − (1 − rj)^{2^b}],
  Gij,b = (ri + rj − sij)(1 − ri − rj + sij)^{2^b − 1} / [1 − (1 − ri − rj + sij)^{2^b}],
  i, j ∈ {1, 2, 3}, i ≠ j.

Theorem 1 naturally suggests an iterative estimation procedure, by writing Eq. (6) as s = Pb u − Z.
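The theorem transcribes directly into code. The sketch below (our names, plain Python) evaluates
Eq. (6) literally and is handy for numerically checking the formula, as done in Fig. 2; it assumes the
arguments satisfy the obvious constraints (e.g., ri + rj > sij > 0).

```python
def A(r, b):
    # A_{j,b} of Theorem 1
    return r * (1 - r) ** (2**b - 1) / (1 - (1 - r) ** (2**b))

def G(ri, rj, sij, b):
    # G_{ij,b} of Theorem 1
    q = ri + rj - sij
    return q * (1 - q) ** (2**b - 1) / (1 - (1 - q) ** (2**b))

def Pb_exact(r1, r2, r3, s12, s13, s23, s, b):
    # P_b = Z/u + R = (Z + s)/u, Eq. (6)
    u = r1 + r2 + r3 - s12 - s13 - s23 + s
    A1, A2, A3 = A(r1, b), A(r2, b), A(r3, b)
    G12, G13, G23 = G(r1, r2, s12, b), G(r1, r3, s13, b), G(r2, r3, s23, b)
    Z = ((s12 - s) * A3 + (r3 - s13 - s23 + s) / (r1 + r2 - s12) * s12 * G12
         + (s13 - s) * A2 + (r2 - s12 - s23 + s) / (r1 + r3 - s13) * s13 * G13
         + (s23 - s) * A1 + (r1 - s12 - s13 + s) / (r2 + r3 - s23) * s23 * G23
         + ((r2 - s23) * A3 + (r3 - s23) * A2)
           * (r1 - s12 - s13 + s) / (r2 + r3 - s23) * G23
         + ((r1 - s13) * A3 + (r3 - s13) * A1)
           * (r2 - s12 - s23 + s) / (r1 + r3 - s13) * G13
         + ((r1 - s12) * A2 + (r2 - s12) * A1)
           * (r3 - s13 - s23 + s) / (r1 + r2 - s12) * G12)
    return (Z + s) / u
```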
[Figure 2 plot panels omitted: Pb versus sample size k (up to 500) for b = 2, 3, 4, at D = 2^16
(left) and D = 2^20 (right); empirical curves and theoretical predictions overlap.]

Figure 2: Pb, for verifying the probability formula in Theorem 1. The empirical estimates and the
theoretical predictions essentially overlap, regardless of the sparsity measure ri = fi/D.
A Simulation Study. For the purpose of verifying Theorem 1, we use three sets corresponding
to the occurrences of three common words ('OF', 'AND', and 'OR') in a chunk of real-world Web
crawl data. Each (word) set is a set of document (Web page) IDs which contained that word at least
once. The three sets are not too sparse and D = 2^16 suffices to represent their elements. The
ri = fi/D values are 0.5697, 0.5537, and 0.3564, respectively. The true 3-way resemblance is
R = 0.47. We can also increase D by mapping these sets into a larger space using a random mapping,
with D = 2^16, 2^18, 2^20, or 2^22. When D = 2^22, the ri values are 0.0089, 0.0087, and 0.0056.

Fig. 2 presents the empirical estimates of the probability Pb, together with the theoretical
predictions of Theorem 1. The empirical estimates essentially overlap the theoretical predictions.
Even though the proof assumes D → ∞, D does not have to be too large for Theorem 1 to be accurate.
3 The Much Simplified Estimator for Sparse Data
The basic probability formula (Theorem 1) we derived could be too complicated for practical use. To
obtain a simpler formula, we leverage the observation that in practice we often have ri = fi/D ≈ 0,
even though both fi and D can be very large. For example, consider web duplicate detection [17].
Here, D = 2^64, which means that even for a web page with fi = 2^54 shingles (corresponding to the
text of a small novel), we still have fi/D ≈ 0.001. Note that, even when ri ≈ 0, the ratios, e.g.,
r1/r2, can still be large. Recall that the resemblances (2) and (3) are determined only by these ratios.
We analyzed the distribution of fi/D using two real-life datasets: the UCI dataset containing 3 × 10^5
NYTimes articles, and a Microsoft proprietary dataset with 10^6 news articles [19]. For the
UCI-NYTimes dataset, each document was already processed as a set of single words. For the
anonymous dataset, we report results using three different representations: single words (1-shingle),
2-shingles (two contiguous words), and 3-shingles. Table 1 reports the summary statistics of the
fi/D values.
Table 1: Summary statistics of the fi/D values in two datasets

Data                                   Median    Mean      Std.
3 × 10^5 UCI-NYTimes articles          0.0021    0.0022    0.0011
10^6 Microsoft articles (1-shingle)    0.00027   0.00032   0.00023
10^6 Microsoft articles (2-shingle)    0.00003   0.00004   0.00005
10^6 Microsoft articles (3-shingle)    0.00002   0.00002   0.00002
For truly large-scale applications, prior studies [3, 4, 17] commonly used 5-shingles. This means
that real world data may be significantly more sparse than the values reported in Table 1.
3.1 The Simplified Probability Formula and the Practical Estimator
Theorem 2 Assume D is large. Let T = R12 + R13 + R23. As r1, r2, r3 → 0,

  Pb = Pr( ∏_{i=1}^{b} 1{e1,i = e2,i = e3,i} = 1 ) = (1/4^b) { (2^b − 1)(2^b − 2) R + (2^b − 1) T + 1 }.        (7)

Interestingly, if b = 1, then P1 = (1 + T)/4, i.e., no information about the 3-way resemblance R is
contained. Hence, it is necessary to use b ≥ 2 to estimate 3-way similarities.
Alg. 1 uses P̂b and P̂ij,b to denote, respectively, the empirical estimates of the theoretical
probabilities Pb and Pij,b. Assuming r1, r2, r3 → 0, the proposed estimator of R, denoted by R̂b, is

  R̂b = [4^b P̂b − 2^b (P̂12,b + P̂13,b + P̂23,b) + 2] / [(2^b − 1)(2^b − 2)].        (8)
Theorem 3 Assume D is large and r1, r2, r3 → 0. Then R̂b in (8) is unbiased, with variance

  Var(R̂b) = (1/k) · 1/[(2^b − 1)(2^b − 2)] · { 1 + (2^b − 3) T + (4^b − 6·2^b + 10) R − (2^b − 1)(2^b − 2) R² }.        (9)

It is interesting to examine several special cases:

• b = 1: Var(R̂1) = ∞, i.e., one must use b ≥ 2.
• b = 2: Var(R̂2) = (1/(6k)) (1 + T + 2R − 6R²).
• b = ∞: Var(R̂∞) = (1/k) R(1 − R) = Var(R̂M). R̂M is the original minwise hashing estimator for
3-way resemblance. In principle, the estimator R̂M requires infinite precision (i.e., b = ∞).
Numerically, Var(R̂M) and Var(R̂64) are indistinguishable.
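The variance formula is simple to evaluate; a small helper (our naming, requires b ≥ 2) with the
special cases above used as sanity checks:

```python
def var_R3(R, T, b, k):
    # Eq. (9); T = R12 + R13 + R23, requires b >= 2
    c = (2**b - 1) * (2**b - 2)
    return (1 + (2**b - 3) * T + (4**b - 6 * 2**b + 10) * R - c * R**2) / (k * c)

# sanity checks against the special cases above:
assert abs(var_R3(0.3, 0.9, 2, 100) - (1 + 0.9 + 0.6 - 6 * 0.09) / 600) < 1e-12
assert abs(var_R3(0.3, 0.9, 40, 100) - 0.3 * 0.7 / 100) < 1e-6  # b -> infinity
```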
3.2 Simulations for Validating Theorem 3

We now present a simulation study for verifying Theorem 3, using the same three sets used in Fig. 2.
Fig. 3 presents the resulting empirical biases, E(R̂b) − R. Fig. 4 presents the empirical mean square
errors (MSE = bias² + variance), together with the theoretical variances Var(R̂b) from Theorem 3.
[Figure 3 plot panels omitted: empirical bias versus sample size k (up to 500) at D = 2^16, 2^18,
2^20, 2^22, with curves for b = 2, 3, 4 and the original minwise hashing (M).]

Figure 3: Bias of R̂b (8). We used 3 (word) sets: 'OF', 'AND', and 'OR', and four D values: 2^16,
2^18, 2^20, and 2^22. We conducted experiments using b = 2, 3, and 4, as well as the original
minwise hashing (denoted by 'M'). The plots verify that as ri decreases (to zero), the biases vanish.
Note that the set sizes fi remain the same, but the relative values ri = fi/D decrease as D increases.
[Figure 4 plot panels omitted: mean square error (MSE, log scale) versus sample size k at D = 2^16,
2^18, 2^20, 2^22; legend: 2 bits, 3 bits, 4 bits, minwise, and the theoretical variances.]

Figure 4: MSE of R̂b (8). The solid curves are the empirical MSEs (= var + bias²) and the dashed
lines are the theoretical variances (9), under the assumption ri → 0. Ideally, we would like to see
the solid and dashed lines overlap. When D = 2^20 and D = 2^22, even though the ri values are not
too small, the solid and dashed lines almost overlap. Note that, at the same sample size k, we always
have Var(R̂2) > Var(R̂3) > Var(R̂4) > Var(R̂M), where R̂M is the original minwise hashing
estimator. We can see that Var(R̂3) and Var(R̂4) are very close to Var(R̂M).
We can summarize the results in Fig. 3 and Fig. 4 as follows:

• When the ri = fi/D values are large (e.g., ri ≈ 0.5 when D = 2^16), the estimates using (8) can be
noticeably biased. The estimation biases diminish as the ri values decrease. In fact, even when the
ri values are not small (e.g., ri ≈ 0.05 when D = 2^20), the biases are already very small (roughly
0.005 when D = 2^20).
• The variance formula (9) becomes accurate when the ri values are not too large. For example, when
D = 2^18 (ri ≈ 0.1), the empirical MSEs largely overlap the theoretical variances which assumed
ri → 0, unless the sample size k is large. When D = 2^20 (and D = 2^22), the empirical MSEs and
theoretical variances overlap.
• For real applications, as we expect D will be very large (e.g., 2^64) and the ri values (fi/D)
will be very small, our proposed simple estimator (8) will be very useful in practice, because it
becomes unbiased and the variance can be reliably predicted by (9).
4 Improving Estimates for Dense Data Using Theorem 1

While we believe the simple estimator in (8) and Alg. 1 should suffice in most applications, we
demonstrate here that the sparsity assumption ri → 0 is not essential if one is willing to use the
more sophisticated estimation procedure provided by Theorem 1.

By Eq. (6), s = Pb u − Z, where Z contains s, the sij, the ri, etc. We first estimate the sij (from
the estimated Rij) using the precise formula for the two-way case; see Appendix A. We then
iteratively solve for s, using the initial guess provided by the estimator R̂b in (8). Usually a few
iterations suffice.
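A minimal sketch of this fixed-point iteration (our naming), reusing the Pb_exact helper from the
Theorem 1 sketch above: by Eq. (6), Z(s) = Pb_exact(s)·u(s) − s, so Z needs no separate
transcription. The sij are assumed to have been recovered first via Appendix A, and s_init from the
simplified estimator (8).

```python
def estimate_s_dense(Pb_hat, r1, r2, r3, s12, s13, s23, s_init, b, n_iter=10):
    s = s_init
    for _ in range(n_iter):
        u = r1 + r2 + r3 - s12 - s13 - s23 + s
        Z = Pb_exact(r1, r2, r3, s12, s13, s23, s, b) * u - s  # Z(s), Eq. (6)
        s = Pb_hat * u - Z        # the update s = Pb*u - Z of Sect. 4
    return s
```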
Fig. 5 reports the bias (leftmost panel, only for D = 2^16) and MSE, corresponding to Fig. 3 and
Fig. 4. In Fig. 5, the solid curves are obtained using the precise estimation procedure of Theorem 1.
The dashed curves are the estimates using the simplified estimator R̂b, which assumes ri → 0.

Even when the data are not sparse, the precise procedure provides unbiased estimates, as verified by
the leftmost panel of Fig. 5. Using the precise procedure results in noticeably more accurate
estimates on non-sparse data, as verified by the second panel of Fig. 5. However, as long as the data
are reasonably sparse (the right two panels), the simple estimator R̂b in (8) is accurate.
[Figure 5 plot panels omitted: bias (leftmost, D = 2^16) and MSE (D = 2^16, 2^18, 2^20) versus
sample size k, for b = 2 and b = 3.]

Figure 5: The bias (leftmost panel) and MSE of the precise estimation procedure, using the same
data as in Fig. 3 and Fig. 4. The dashed curves correspond to the estimates using the simplified
estimator R̂b in (8), which assumes ri → 0.
5 Quantifying the Improvements Using b-Bit Hashing

This section is devoted to analyzing the improvements of b-bit minwise hashing, compared to using
64 bits for each hashed value. Throughout the paper, we use the terms 'sample' and 'sample size'
(denoted by k). The original minwise hashing stores each 'sample' using 64 bits (as in [17]). For
b-bit minwise hashing, we store each 'sample' using b bits only. Note that Var(R̂64) and Var(R̂M)
(the variance of the original minwise hashing) are numerically indistinguishable.

As we decrease b, the space needed for each sample becomes smaller; the estimation variance at the
same sample size k, however, increases. This variance-space trade-off can be quantified by
B(b) = b · Var(R̂b) · k, which is called the storage factor. Lower B(b) is more desirable. The ratio
B(64)/B(b) precisely characterizes the improvement of b-bit hashing compared to using 64 bits.
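Reusing var_R3 from the sketch above, the storage ratio is one line; the illustrative check below
(our numbers) is consistent with the roughly 20 to 25-fold improvements read off Fig. 6.

```python
def storage_ratio(R, T, b, k=100):
    # B(b) = b * Var(R_hat_b) * k; Var(R_hat_64) ~ Var(R_hat_M) = R(1-R)/k
    B64 = 64 * (R * (1 - R) / k) * k
    Bb = b * var_R3(R, T, b, k) * k
    return B64 / Bb

# e.g. storage_ratio(0.3, 3 * 0.3, b=2) is about 20.6
```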
Fig. 6 confirms the substantial improvements of b-bit hashing over the original minwise hashing
using 64 bits. The improvements in terms of storage space are usually 10 (or 15) to 25-fold when
the sets are reasonably similar (i.e., when the 3-way resemblance > 0.1). When the three sets are
very similar (e.g., the top left panel), the improvement can be even 25 to 30-fold.

[Figure 6 plot panels omitted: storage ratio B(64)/B(b) versus R for b = 2, 3, 4, 6, at T = 3R,
4R, 6R, 10R, 20R, and 50R.]

Figure 6: B(64)/B(b), the relative storage improvement of using b = 2, 3, 4, 6 bits, compared to
using 64 bits. Since the variance (9) contains both R and T = R12 + R13 + R23, we compare variances
using different T/R ratios. As T ≥ 3R always, we let T = ζR for some ζ ≥ 3. Since T ≤ 3, we know
R ≤ 3/ζ. Practical applications are often interested in cases with reasonably large R values.
6 Evaluation of Accuracy

We conducted a duplicate detection experiment on a public (UCI) collection of 300,000 NYTimes
news articles. The task is to identify 3-groups with 3-way resemblance R exceeding a threshold R0.
We used a subset of the data; the total number of 3-groups is about one billion. We experimented
with b = 2, 4 and the original minwise hashing. Fig. 7 presents the precision curves for a
representative set of thresholds R0. Just like in [35], the recall curves are not shown because they
could not differentiate estimators. These curves confirm the significant improvement of using b-bit
minwise hashing when the threshold R0 is quite high (e.g., 0.3). In fact, when R0 = 0.3, using b = 4
resulted in similar precision as using the original minwise hashing (i.e., a 64/4 = 16-fold reduction
in storage). Even when R0 = 0.1, using b = 4 can still achieve similar precision as the original
minwise hashing by only slightly increasing the sample size k.
[Figure 7 plot panels omitted: precision versus sample size k (up to 500) for R0 = 0.1, 0.2, 0.3,
with curves for b = 2, b = 4, and the original minwise hashing (M).]

Figure 7: Precision curves on the UCI collection of news data. The task is to retrieve news article
3-groups with resemblance R ≥ R0. For example, consider R0 = 0.2. To achieve a precision of at
least 0.8, 2-bit hashing and 4-bit hashing require about k = 500 samples and k = 260 samples
respectively, while the original minwise hashing (denoted by M) requires about 170 samples.
7 Conclusion

Computing set similarities is fundamental in many applications. In machine learning,
high-dimensional binary data are common and are equivalent to sets. This study is devoted to
simultaneously estimating 2-way and 3-way similarities using b-bit minwise hashing. Compared to the
prior work on estimating 2-way resemblance [35], the extension to 3-way is important for many
application scenarios (as described in Sec. 1) and is technically non-trivial.

For estimating 3-way resemblance, our analysis shows that b-bit minwise hashing can normally
achieve a 10 to 25-fold improvement in the storage space required for a given estimator accuracy,
when the set similarities are not extremely low (e.g., 3-way resemblance > 0.02). Many applications
such as data cleaning and de-duplication are mainly concerned with relatively high set similarities.

For many practical applications, the reductions in storage directly translate to improvements in
processing speed as well, especially when memory latency is the main bottleneck, which, with the
advent of many-core processors, is more and more common.

Future work: We are interested in developing a b-bit version of Conditional Random Sampling (CRS)
[31, 32, 33], which requires only one permutation (instead of k permutations) and naturally extends
to non-binary data. CRS is also provably more accurate than minwise hashing for binary data.
However, the analysis for developing the b-bit version of CRS appears to be very difficult.
A Review of b-Bit Minwise Hashing for 2-Way Resemblance

Theorem 4 ([35]) Assume D is large. Then

  P12,b = Pr( ∏_{i=1}^{b} 1{e1,i = e2,i} = 1 ) = C1,b + (1 − C2,b) R12,

where

  C1,b = A1,b · r2/(r1 + r2) + A2,b · r1/(r1 + r2),
  C2,b = A1,b · r1/(r1 + r2) + A2,b · r2/(r1 + r2),

  A1,b = r1 (1 − r1)^{2^b − 1} / (1 − (1 − r1)^{2^b}),   A2,b = r2 (1 − r2)^{2^b − 1} / (1 − (1 − r2)^{2^b}).

If r1, r2 → 0, then P12,b = [1 + (2^b − 1) R12] / 2^b, and one can estimate R12 by
(2^b P̂12,b − 1)/(2^b − 1), where P̂12,b is the empirical observation of P12,b. If r1, r2 are not
small, R12 is estimated by (P̂12,b − C1,b)/(1 − C2,b).
References
[1] S. Agarwal, J. Lim, L. Zelnik-Manor, P. Perona, D. Kriegman, and S. Belongie. Beyond pairwise clustering. In CVPR, 2005.
[2] M. Bendersky and W. B. Croft. Finding text reuse on the web. In WSDM, pages 262-271, Barcelona, Spain, 2009.
[3] A. Z. Broder. On the resemblance and containment of documents. In the Compression and Complexity of Sequences, pages 21-29, Positano, Italy, 1997.
[4] A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig. Syntactic clustering of the web. In WWW, pages 1157-1166, Santa Clara, CA, 1997.
[5] G. Buehrer and K. Chellapilla. A scalable pattern mining approach to web graph compression with communities. In WSDM, pages 95-106, Stanford, CA, 2008.
[6] O. Chapelle, P. Haffner, and V. N. Vapnik. Support vector machines for histogram-based image classification. IEEE Transactions on Neural Networks, 10(5):1055-1064, 1999.
[7] M. S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pages 380-388, Montreal, Quebec, Canada, 2002.
[8] S. Chaudhuri. An overview of query optimization in relational systems. In PODS, pages 34-43, 1998.
[9] S. Chaudhuri, V. Ganti, and R. Kaushik. A primitive operator for similarity joins in data cleaning. In ICDE, 2006.
[10] S. Chaudhuri, V. Ganti, and R. Motwani. Robust identification of fuzzy duplicates. In ICDE, pages 865-876, Tokyo, Japan, 2005.
[11] F. Chierichetti, R. Kumar, S. Lattanzi, M. Mitzenmacher, A. Panconesi, and P. Raghavan. On compressing social networks. In KDD, pages 219-228, Paris, France, 2009.
[12] K. Church. Approximate lexicography and web search. International Journal of Lexicography, 21(3):325-336, 2008.
[13] K. Church and P. Hanks. Word association norms, mutual information and lexicography. Computational Linguistics, 16(1):22-29, 1991.
[14] E. Cohen, M. Datar, S. Fujiwara, A. Gionis, P. Indyk, R. Motwani, J. D. Ullman, and C. Yang. Finding interesting associations without support pruning. IEEE Trans. on Knowl. and Data Eng., 13(1), 2001.
[15] F. Diaz. Integration of news content into web results. In WSDM, 2009.
[16] Y. Dourisboure, F. Geraci, and M. Pellegrini. Extraction and classification of dense implicit communities in the web graph. ACM Trans. Web, 3(2):1-36, 2009.
[17] D. Fetterly, M. Manasse, M. Najork, and J. L. Wiener. A large-scale study of the evolution of web pages. In WWW, pages 669-678, Budapest, Hungary, 2003.
[18] G. Forman, K. Eshghi, and J. Suermondt. Efficient detection of large-scale redundancy in enterprise file systems. SIGOPS Oper. Syst. Rev., 43(1):84-91, 2009.
[19] M. Gamon, S. Basu, D. Belenko, D. Fisher, M. Hurst, and A. C. König. Blews: Using blogs to provide context for news articles. In AAAI Conference on Weblogs and Social Media, 2008.
[20] H. Garcia-Molina, J. D. Ullman, and J. Widom. Database Systems: the Complete Book. Prentice Hall, New York, NY, 2002.
[21] A. Gionis, D. Gunopulos, and N. Koudas. Efficient and tunable similar set retrieval. In SIGMOD, pages 247-258, CA, 2001.
[22] S. Gollapudi and A. Sharma. An axiomatic approach for result diversification. In WWW, pages 381-390, Madrid, Spain, 2009.
[23] M. Hein and O. Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In AISTATS, pages 136-143, Barbados, 2005.
[24] P. Indyk and R. Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604-613, Dallas, TX, 1998.
[25] Y. E. Ioannidis. The history of histograms (abridged). In VLDB, 2003.
[26] Y. Jiang, C. Ngo, and J. Yang. Towards optimal bag-of-features for object categorization and semantic video retrieval. In CIVR, pages 494-501, Amsterdam, Netherlands, 2007.
[27] N. Jindal and B. Liu. Opinion spam and analysis. In WSDM, pages 219-230, Palo Alto, California, USA, 2008.
[28] K. Kalpakis and S. Tang. Collaborative data gathering in wireless sensor networks using measurement co-occurrence. Computer Communications, 31(10):1979-1992, 2008.
[29] A. C. König, M. Gamon, and Q. Wu. Click-through prediction for news queries. In SIGIR, 2009.
[30] H. Lee, R. T. Ng, and K. Shim. Power-law based estimation of set similarity join size. In PVLDB, 2009.
[31] P. Li and K. W. Church. A sketch algorithm for estimating two-way and multi-way associations. Computational Linguistics, 33(3):305-354, 2007 (preliminary results appeared in HLT/EMNLP 2005).
[32] P. Li, K. W. Church, and T. J. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. In NIPS, pages 873-880, Vancouver, BC, Canada, 2006.
[33] P. Li, K. W. Church, and T. J. Hastie. One sketch for all: Theory and applications of conditional random sampling. In NIPS, Vancouver, BC, Canada, 2008.
[34] P. Li, T. J. Hastie, and K. W. Church. Improving random projections using marginal information. In COLT, pages 635-649, Pittsburgh, PA, 2006.
[35] P. Li and A. C. König. b-bit minwise hashing. In WWW, pages 671-680, Raleigh, NC, 2010.
[36] Ludmila, K. Eshghi, C. B. M. III, J. Tucek, and A. Veitch. Probabilistic frequent itemset mining in uncertain databases. In KDD, pages 1087-1096, Paris, France, 2009.
[37] G. S. Manku, A. Jain, and A. D. Sarma. Detecting near-duplicates for web-crawling. In WWW, Banff, Alberta, Canada, 2007.
[38] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, MA, 1999.
[39] M. Najork, S. Gollapudi, and R. Panigrahy. Less is more: sampling the neighborhood graph makes salsa better and faster. In WSDM, pages 242-251, Barcelona, Spain, 2009.
[40] S. Sarawagi and A. Kirpal. Efficient set joins on similarity predicates. In SIGMOD, pages 743-754, 2004.
[41] T. Urvoy, E. Chauveau, P. Filoche, and T. Lavergne. Tracking web spam with html style similarities. ACM Trans. Web, 2(1):1-28, 2008.
[42] X. Wang and C. Zhai. Mining term association patterns from search logs for effective query reformulation. In CIKM, pages 479-488, Napa Valley, California, USA, 2008.
[43] D. Zhou, J. Huang, and B. Schölkopf. Beyond pairwise classification and clustering using hypergraphs. 2006.
Computing with Arrays of Bell-Shaped and Sigmoid Functions
Pierre Baldi*
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109
Abstract
We consider feed-forward neural networks with one non-linear hidden layer
and linear output units. The transfer functions in the hidden layer are either bell-shaped or sigmoid. In the bell-shaped case, we show how Bernstein polynomials on one hand and the theory of the heat equation on the
other are relevant for understanding the properties of the corresponding
networks. In particular, these techniques yield simple proofs of universal
approximation properties, i.e. of the fact that any reasonable function can
be approximated to any degree of precision by a linear combination of bell-shaped functions. In addition, in this framework the problem of learning
is equivalent to the problem of reversing the time course of a diffusion process. The results obtained in the bell-shaped case can then be applied to
the case of sigmoid transfer functions in the hidden layer, yielding similar
universality results. A conjecture related to the problem of generalization
is briefly examined.
1
INTRODUCTION
Bell-shaped response curves are commonly found in biological neurons whenever a
natural metric exists on the corresponding relevant stimulus variable (orientation,
position in space, frequency, time delay, ... ). As a result, they are often used in
neural models in different contexts ranging from resolution enhancement and interpolation to learning (see, for instance, Baldi et al. (1988), Moody et al. (1989)
*and Division of Biology, California Institute of Technology. The complete title of
this paper should read: "Computing with arrays of bell-shaped and sigmoid functions.
Bernstein polynomials, the heat equation and universal approximation properties".
and Poggio et al. (1990)). Consider then the problem of approximating a function
y = f(x) by a weighted sum of bell-shaped functions B(k, x), i.e. of finding a suitably good set of weights H(k) satisfying

$$f(x) \approx \sum_{k} H(k)\,B(k, x). \qquad (1)$$
In neural network terminology, this corresponds to using a feed-forward network
with a unique hidden layer of bell-shaped units and a linear output layer. In this note,
we first briefly point out how this question is related to two different mathematical
concepts: Bernstein Polynomials on one hand and the Heat Equation on the other.
The former shows how such an approximation is always possible for any reasonable
function whereas through the latter the problem of learning, that is of finding H(k),
is equivalent to reversing the time course of a diffusion process. For simplicity, the
relevant ideas are presented in one dimension. However, the extension to the general
setting is straightforward and will be sketched in each case. We then indicate how
these ideas can be applied to similar neural networks with sigmoid transfer functions
in the hidden layer. A conjecture related to the problem of generalization is briefly
examined.
2 BERNSTEIN POLYNOMIALS
In this section, without any loss of generality, we assume that all the functions to
be considered are defined over the interval [0,1]. For a fixed integer n, there are n + 1 Bernstein polynomials of degree n (see, for instance, Feller (1971)) given by

$$B_n(k, x) = \binom{n}{k} x^k (1 - x)^{n-k}. \qquad (2)$$
Bn(k, x) can be interpreted as being the probability of having k successes in a coin
flipping experiment of duration n, where x represents the probability of a single
success. It is easy to see that Bn(k, x) is bell-shaped and reaches its maximum
for x = k/n. Can we then approximate a function f using linear combinations
of Bernstein polynomials of degree n? Let us first consider, as an example, the
simple case of the identity function f(x) = x (x in [0,1]). If we interpret x as the probability of success on a single coin toss, then the expected number of successes in n trials is given by

$$\sum_{k=0}^{n} k \binom{n}{k} x^k (1-x)^{n-k} = nx, \qquad (3)$$

or equivalently

$$x = \sum_{k=0}^{n} \frac{k}{n} \binom{n}{k} x^k (1-x)^{n-k}. \qquad (4)$$
The remarkable theorem of Bernstein is that (4) remains approximately true for a
general function f. More precisely:

Theorem: Assume f is a bounded function defined over the interval [0,1]. Then

$$\lim_{n\to\infty} \sum_{k=0}^{n} f\Big(\frac{k}{n}\Big) \binom{n}{k} x^k (1-x)^{n-k} = f(x) \qquad (5)$$

at any point x where f is continuous. Moreover, if f is continuous everywhere, the sequence in (5) approaches f uniformly.
Proof: The proof is beautiful and elementary. It is easy to see that

$$\Big|f(x) - \sum_{k=0}^{n} f\Big(\frac{k}{n}\Big)\binom{n}{k}x^k(1-x)^{n-k}\Big| \le \sum_{|x-\frac{k}{n}|<\delta}\Big|f(x)-f\Big(\frac{k}{n}\Big)\Big| B_n(k,x) + \sum_{|x-\frac{k}{n}|\ge\delta}\Big|f(x)-f\Big(\frac{k}{n}\Big)\Big| B_n(k,x)$$

for any 0 < δ < 1. To bound the first term on the right-hand side of this inequality, we use the fact that for fixed ε and for n large enough, at a point of continuity x, we can find a δ such that |f(x) - f(k/n)| < ε as soon as |x - k/n| < δ. Thus the first term is bounded by ε. If f is continuous everywhere, then it is uniformly continuous and δ can be found independently of x. For the second term, since f is bounded (|f(x)| ≤ M), we have |f(x) - f(k/n)| ≤ 2M. Now we use Tchebycheff's inequality (P(|X - E(X)| ≥ a) ≤ Var(X)/a²) to bound the tail of the binomial series:

$$\sum_{|x-\frac{k}{n}|\ge\delta}\binom{n}{k}x^k(1-x)^{n-k} \le \frac{nx(1-x)}{\delta^2 n^2} \le \frac{1}{4n\delta^2}.$$

Collecting terms, we finally get

$$\Big|f(x) - \sum_{k=0}^{n} f\Big(\frac{k}{n}\Big)\binom{n}{k}x^k(1-x)^{n-k}\Big| \le \epsilon + \frac{M}{2n\delta^2},$$

which completes the proof.
Bernstein's theorem provides a probabilistic constructive proof of Weierstrass' theorem, which asserts that every continuous function over a compact set can be uniformly approximated by a sequence of polynomials. Its "connectionist" interpretation is that every reasonable function can be computed by a two-layer network consisting of one array of equally spaced bell-shaped detectors feeding into one linear output unit. In addition, the weighting function H(k) is the function f itself (see also Baldi et al. (1988)). Notice that the shape of the functions B_n(k, x) in the array depends on k: in the center (k ≈ n/2) they are very symmetric and similar to gaussians; as one moves towards the periphery the shape becomes less symmetric.
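As a small numerical illustration (our own sketch, not part of the original text): the hidden layer is the array of n + 1 Bernstein units and the output weights are simply the sampled values f(k/n).

```python
import numpy as np
from math import comb

def bernstein_approx(f, n, x):
    """Approximate f on [0,1] by sum_k f(k/n) * C(n,k) * x^k * (1-x)^(n-k).

    Sketch of the two-layer network reading of Bernstein's theorem: hidden
    units are the bell-shaped B_n(k, x), output weights are f(k/n).
    (All names are ours.)
    """
    k = np.arange(n + 1)
    coeff = np.array([comb(n, int(j)) for j in k], dtype=float)
    B = coeff * np.power.outer(x, k) * np.power.outer(1.0 - x, n - k)
    return B @ f(k / n)

x = np.linspace(0.0, 1.0, 200)
approx = bernstein_approx(np.sin, 50, x)  # uniform error shrinks as n grows
```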
Two additional significant properties of Bernstein polynomials are that, for fixed n, they form a partition of unity: $\sum_k B_n(k, x) = (x + (1 - x))^n = 1$, and that they have constant energy $\int_0^1 B_n(k, x)\,dx = 1/(n + 1)$. One important advantage of the approximation defined by (5) is its great smoothness. If f is differentiable, then not only (5) holds but also

$$\lim_{n\to\infty} \frac{d}{dx}\Big(\sum_{k=0}^{n} f\Big(\frac{k}{n}\Big)\binom{n}{k} x^k (1-x)^{n-k}\Big) = \frac{df}{dx} \qquad (6)$$
(6)
737
738
Baldi
uniformly on [0,1] and the same is true for higher order derivatives (see, for instance,
Davis (1963?. Thus the Bernstein polynomials provide simultaneous approximation of a function and of its derivatives. In particular, they preserve the convexity
properties of the function f being approximated and mimic extremely well its qualitative behavior. The price to be paid is in precision, for the convergence in (5)
can sometimes be slow. Good qualitative properties of the approximation may
be relevant for biological systems, whereas precision there may not be a problem,
especially in light of the fact that n is often large.
Finally, this approach can be extended to the general case of an input space with d
dimensions by defining the generalized Bernstein polynomials
If f(x_1, ..., x_d) is a continuous function over the hypercube [0,1]^d, then

$$\sum_{k_1=0}^{n_1}\cdots\sum_{k_d=0}^{n_d} f\Big(\frac{k_1}{n_1},\dots,\frac{k_d}{n_d}\Big)\prod_{j=1}^{d}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j} \qquad (8)$$

approaches f uniformly on [0,1]^d as min n_i → ∞.
3 LEARNING AND THE HEAT EQUATION
Consider again the general problem of approximating a function f by a linear combination of bell-shaped functions, but where now the bell-shaped functions are gaussians B( w, x), of the form
$$B(w, x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-(x-w)^2/2\sigma^2} \qquad (9)$$
The fixed centers w of the gaussians are distributed in space according to a density
p( w) (this enables one to treat the continuous and discrete case together and also
to include the case where the centers are not evenly distributed). This idea was
directly suggested by a presentation of R. Durbin (1990), where the limiting case of
an infinite number of logistic hidden units in a connectionist network was considered.
In this setting, we are trying to express f as
$$f(x) \approx \int_{-\infty}^{+\infty} h(w)\,\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-(x-w)^2/2\sigma^2}\,\rho(w)\,dw \qquad (10)$$

or

$$f(x) \approx \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\,H(w)\,e^{-(x-w)^2/2\sigma^2}\,dw \qquad (11)$$

where H = hρ. Now, diffusion processes or propagation of heat are usually modeled
by a partial differential equation of the type
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} \qquad (12)$$
(the heat equation) where u(x, t) represents the temperature (or the concentration)
at position x at time t. Given a set of initial conditions of the form u(x, 0) = g(x), then the distribution of temperatures at time t is given by
$$u(x, t) = \int_{-\infty}^{+\infty} g(w)\,\frac{1}{\sqrt{4\pi t}}\,e^{-(x-w)^2/4t}\,dw. \qquad (13)$$
Technically, (13) can be shown to give the correct distribution of temperatures at
time t provided g is continuous, |g(x)| = O(exp(hx²)) and 0 ≤ t < 1/4h. Under these conditions, it can be seen that u(x, t) = O(exp(kx²)) for some constant k > 0 (depending on h) and is the unique solution satisfying this property (see Friedman (1964) and John (1975) for more details).
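Numerically, (13) says the network output is just the weight profile g blurred by a Gaussian kernel of variance 2t. A small sketch (our own code and names), approximating the integral by a Riemann sum:

```python
import numpy as np

def heat_solution(g, w, x, t):
    """Evaluate u(x,t) = int g(w) exp(-(x-w)^2/4t) / sqrt(4*pi*t) dw
    by a Riemann sum over grid points w.  A sketch of (13)."""
    dw = w[1] - w[0]
    K = np.exp(-(x[:, None] - w[None, :]) ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    return (K * g(w)[None, :]).sum(axis=1) * dw

w = np.linspace(-10.0, 10.0, 2001)   # support of the initial temperatures
x = np.linspace(-5.0, 5.0, 101)      # evaluation points
u = heat_solution(np.sign, w, x, t=0.5)  # a step profile diffusing in time
```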
The connection to our problem now becomes obvious. If the initial set of temperatures is equal to the weights in the network (H(w) = g(w)), then the function computed by the network is equal to the temperature at x at time t = σ²/2. Given a function f(x) we can view it as a description of temperature values at time σ²/2; the problem of learning, i.e. of determining the optimal h(w) (or H(w)), consists in finding a distribution of initial temperatures at time t = 0 from which f could have evolved. In this sense, learning is equivalent to reversing time in a diffusion process.
If the continuous case is viewed as a limiting case where units with bell-shaped
tuning curves are very densely packed, then it is reasonable to consider that, as
the density is increased, the width σ of the curves tends to 0. As σ → 0, the final
distribution of temperatures approaches the initial one and this is another heuristic
way of seeing why the weighting function H (w) is identical to the function being
learnt.
In the course of a diffusion or heat propagation process, the integral of the concentration (or of the temperature) remains constant in time. Thus the temperature
distribution is similar to a probability distribution and we can define its entropy
$$E(u(x, t)) = -\int_{-\infty}^{+\infty} u(x, t)\,\ln u(x, t)\,dx. \qquad (14)$$
It is easy to see that the heat equation tends to increase E. Therefore learning can
also be viewed as a process by which E is minimized (within certain time boundary constraints). This is intuitively clear if we think of learning as an attempt to evolve
an initially random distribution of connection weights and concentrate it in one or
a few restricted regions of space.
In general, the problem of solving the heat equation backwards in time is difficult:
physically it is an irreversible process and mathematically the problem is ill-posed
in the sense of Hadamard. The solution does not always exist (for instance, the
final set of temperatures must be an analytic function), or exists only over a limited
period of time and, most of all, small changes in the final set of temperatures can
lead to large changes in the initial set of temperatures (see, for instance, John (1955)). However, the problem becomes well-posed if the final set of temperatures
has a compact Fourier spectrum (see Miranker (1961); alternatively, one could use
a regularization approach as in Franklin (1974)). In a connectionist framework, one
usually seeks a least square approximation to a given function. The corresponding
error functional is convex (the heat equation is linear) and therefore a solution
always exists. In addition, the problem is usually not ill-posed because the functions
to be learnt have a bounded spectrum and are often known only through a finite
sample. Thus learning from examples in networks consisting of one hidden layer
of gaussian units and a linear output unit is relatively straightforward, for the
landscape of the usual error function has no local minima and the optimal set of
weights can be found by gradient descent or directly, essentially by linear regression.
To be more precise, we can write the error function in the most general case in the
form:
$$E(h(w)) = \int \Big[f(x) - \int h(u)\,e^{-(x-u)^2/2\sigma^2}\,\mu(u)\,du\Big]^2 \nu(x)\,dx \qquad (15)$$
where μ and ν are the measures defined on the weights and the examples respectively. The gradient, as in the usual back-propagation of errors, is given by:
$$\frac{\partial E}{\partial h(w)} = -2\int \Big[f(x) - \int H(u)\,e^{-(x-u)^2/2\sigma^2}\,du\Big]\,e^{-(x-w)^2/2\sigma^2}\,\mu(w)\,\nu(x)\,dx \qquad (16)$$

Thus the critical weights of (15) where μ(w) ≠ 0 are characterized by the relation
$$\int f(x)\,e^{-(x-w)^2/2\sigma^2}\,\nu(x)\,dx = \int\!\!\int H(u)\,e^{-(x-w)^2/2\sigma^2}\,e^{-(x-u)^2/2\sigma^2}\,\nu(x)\,du\,dx. \qquad (17)$$
If now we assume that the centers of the gaussians in the hidden layer occupy a
(finite or infinite) set of isolated points w_i, (17) can be rewritten in matrix form as

$$B = A\,H(u) \qquad (18)$$

where $B_i = \int f(x)\,\exp(-(x-w_i)^2/2\sigma^2)\,\nu(x)\,dx$, $H(u)_j = h(u_j)\,\mu(u_j)$, and A is the real symmetric matrix with entries

$$A_{ij} = \int e^{-(x-w_i)^2/2\sigma^2}\,e^{-(x-w_j)^2/2\sigma^2}\,\nu(x)\,dx. \qquad (19)$$

Usually A is invertible, so that $H(u) = A^{-1}B$, which, in turn, yields $h(u_i) = H(u_i)/\mu(u_i)$.
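The derivation above reduces learning to a single linear solve. Below is a minimal numerical sketch (our own code and names), taking ν to be the empirical measure on a grid of samples and μ to be uniform on the centers, so that B and A become finite sums:

```python
import numpy as np

def fit_gaussian_layer(x, f, centers, sigma):
    """Solve B = A H for the output weights of a Gaussian hidden layer,
    following (18)-(19) with nu = empirical measure on the samples x."""
    Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * sigma**2))
    B = Phi.T @ f                    # B_i = sum_x f(x) e^{-(x - w_i)^2 / 2 sigma^2}
    A = Phi.T @ Phi                  # A_ij = sum_x e^{...i} e^{...j}
    A += 1e-9 * np.eye(len(centers)) # tiny ridge for numerical stability
    return np.linalg.solve(A, B)     # H = A^{-1} B, as in (19)

x = np.linspace(0.0, 1.0, 400)
centers = np.linspace(0.0, 1.0, 25)
H = fit_gaussian_layer(x, np.sin(2.0 * np.pi * x), centers, sigma=0.05)
```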
Finally, everything can be extended without any difficulty to d dimensions, where
the typical solution of $\nabla^2 u = \partial u/\partial t$ is given by

$$u(x_1, \dots, x_d, t) = \int_{-\infty}^{+\infty}\!\!\cdots\!\!\int_{-\infty}^{+\infty} g(w)\,\frac{1}{(4\pi t)^{d/2}}\,e^{-\sum_i (x_i - w_i)^2/4t}\,dw_1 \cdots dw_d \qquad (20)$$

with, under some smoothness assumptions, u(x, t) → g(x) as t → 0.
Remark
For an application to a discrete setting consider, as in Baldi et al. (1988), the sum
For an initial gaussian distribution of temperatures u(x, 0) of the form (1/√(2π)η) exp(-x²/2η²), the distribution u(x, t) of temperatures at time t is also gaussian, centered at the origin, but with a larger standard deviation which, using (13), is given by (η² + 2t)^{1/2}. Thus, if we imagine that at time 0 a temperature equal to k has been injected (with a very small η) at each integer location along the real axis, then f(x) represents the distribution of temperatures at time t = (σ² - η²)/2. Intuitively, it is clear that as σ is increased (i.e. as we wait longer) the distribution of temperatures becomes more and more linear.
(2) It is aesthetically pleasing that the theory of the heat equation can also be
used to give a proof of Weierstrass theorem. For this purpose, it is sufficient to
observe that, for a given continuous function g defined over a closed interval [a, b],
the function u(x, t) given by (13) is an analytic function in x at a fixed time t.
By letting t → 0 and truncating the corresponding series, one can get polynomial
approximations converging uniformly to g.
4 THE SIGMOID CASE
We now consider the case of a neural network with one hidden layer of sigmoids and
one linear output unit. The output of the network can be written as a transform
$$\mathrm{out}(x) = \int \sigma(w \cdot x)\,h(w)\,\mu(w)\,dw \qquad (21)$$
where x is the input vector and w is a weight vector which is characteristic of each
hidden unit (i.e. each hidden unit is characterized by the vector of weights on its
incoming input lines rather than, for instance, its spatial location). Assume that
the inputs and the weights are normalized, i.e. ‖x‖ = ‖w‖ = 1, and that the weight
vectors cover the n-dimensional sphere uniformly (or, in the limit, that there is a
vector for each point on the sphere). Then for a given input x, the scalar products
w.x are maximal and close to 1 in the region of the sphere corresponding to hidden
units where w and x are collinear and decay as we move away till they reach negative
values close to -1 in the antipodal region. When these scalar products are passed
through an appropriate sigmoid, a bell-shaped pattern of activity is created on the
surface of the sphere and from then on we are reduced to the previous case. Thus the
previous results can be extended and in particular we have a heuristic simple proof
that the corresponding networks have universal approximation properties (see, for
instance, Hornik et al. (1989)). Notice that intuitively the reason is simple, for we end up with something like a grand-mother cell per pattern or cluster of patterns.
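A small numerical sketch of this bump formation (our own code; the gain and bias values are arbitrary choices of ours):

```python
import numpy as np

def hidden_activity(x, W, gain=8.0, bias=6.0):
    """Sigmoid responses sigma(gain * (w . x) - bias) for unit weight vectors W
    (one per row) and a unit input x.  With a steep gain and positive bias,
    activity is high only where w is nearly collinear with x, producing a
    bell-shaped bump over the sphere of hidden units."""
    return 1.0 / (1.0 + np.exp(-(gain * (W @ x) - bias)))

# hidden units uniformly covering the unit circle (d = 2 for illustration)
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
W = np.stack([np.cos(angles), np.sin(angles)], axis=1)
x = np.array([1.0, 0.0])
activity = hidden_activity(x, W)  # peaks at the unit aligned with x
```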
If we assume that initially μ(w) ≠ 0 everywhere, then it is clear that for learning
via LMS optimization we can take μ to be fixed and adjust only the output weights
h. But the problem then is convex and without local minima. This suggests that in
the limit of an extremely large number of hidden units, the landscape of the error
function is devoid of local minima and learning becomes very smooth. This result
is consistent with the conjecture that under reasonable assumptions, as we progressively increase the number of hidden units, learning goes from being impossible, to
being possible but difficult and lengthy, to being relatively easy and quick to trivial.
And if so what is the nature of these transitions? This picture is also consistent
with certain simulation results reported by several authors, whereby optimal performance and generalization is not best obtained by training for a very long time a
minimal size highly constrained network, but rather by training for a shorter time
(until the validation error begins to go up; see Baldi and Chauvin (1991)) a larger
network with extra hidden units.
Acknowledgements
This work is supported by NSF grant DMS-8914302 and ONR contract NAS7100/918. We would like to thank Y. Rinott for useful discussions.
References
Baldi, P. and Heiligenberg, W. (1988) How sensory maps could enhance resolution
through ordered arrangements of broadly tuned receivers. Biological Cybernetics,
59, 313-318.
Baldi, P. and Chauvin, Y. (1991) A study of generalization in simple networks.
Submitted for publication.
Davis, P. J. (1963) Interpolation and approximation. Blaisdell.
Durbin, R. (1990) Presented at the Neural Networks for Computing Conference,
Snowbird, Utah.
Feller, W. (1971) An introduction to probability theory and its applications. John
Wiley & Sons
Franklin, J. N. (1974) On Tikhonov's method for ill-posed problems. Mathematics
of Computation, 28, 128, 889-907.
Friedman, A. (1964) Partial differential equations of parabolic type. Prentice-Hall.
Hornik, K., Stinchcombe, M. and White, H. (1989) Multilayer feedforward networks
are universal approximators. Neural Networks, 2, 5, 359-366.
John, F. (1955) Numerical solutions of the equation of heat conduction for preceding
times. Ann. Mat. Pura Appl., ser. IV, vol. 40, 129-142.
John, F. (1975) Partial differential equations. Springer Verlag.
Miranker, W. L. (1961) A well posed problem for the backward heat equation.
Proceedings American Mathematical Society, 12, 243-247.
Moody, J. and Darken, C. J. (1989) Fast learning in networks of locally-tuned
processing units. Neural Computation, 1, 2, 281-294.
Poggio, T. and Girosi, F. (1990) Regularization algorithms for learning that are
equivalent to multilayer networks. Science, 247, 978-982.
Estimating Spatial Layout of Rooms using Volumetric
Reasoning about Objects and Surfaces
David C. Lee, Abhinav Gupta, Martial Hebert, Takeo Kanade
Carnegie Mellon University
{dclee,abhinavg,hebert,tk}@cs.cmu.edu
Abstract
There has been a recent push in extraction of 3D spatial layout of scenes. However,
none of these approaches model the 3D interaction between objects and the spatial
layout. In this paper, we argue for a parametric representation of objects in 3D,
which allows us to incorporate volumetric constraints of the physical world. We
show that augmenting current structured prediction techniques with volumetric
reasoning significantly improves the performance of the state-of-the-art.
1 Introduction
Consider the indoor image shown in Figure 1. Understanding such a complex scene not only involves visual recognition of objects but also requires extracting the 3D spatial layout of the room
(ceiling, floor and walls). Extraction of the spatial layout of a room provides crucial geometric context required for visual recognition. There has been a recent push to extract spatial layout of the
room by classifiers which predict qualitative surface orientation labels (floor, ceiling, left, right, center wall and object) from appearance features and then fit a parametric model of the room. However,
such an approach is limited in that it does not use the additional information conveyed by the configuration of objects in the room and, therefore, it fails to use all of the available cues for estimating
the spatial layout.
In this paper, we propose to incorporate an explicit volumetric representation of objects in 3D for
spatial interpretation process. Unlike previous approaches which model objects by their projection
in the image plane, we propose a parametric representation of the 3D volumes occupied by objects
in the scene. We show that such a parametric representation of the volume occupied by an object
can provide crucial evidence for estimating the spatial layout of the rooms. This evidence comes
from volumetric reasoning between the objects in the room and the spatial layout of the room. We
propose to augment the existing structured classification approaches with volumetric reasoning in
3D for extracting the spatial layout of the room.
Figure 1 shows an example of a case where volumetric reasoning is crucial in estimating the surface
layout of the room. Figure 1(b) shows the estimated spatial layout for the room (overlaid on surface
orientation labels predicted by a classifier) when no reasoning about the objects is performed. In
this case, the couch is predicted as floor and therefore there is substantial error in estimating the
spatial layout. If the couch is predicted as clutter and the image evidence from the couch is ignored
(Figure 1(c)), multiple room hypotheses can be selected based on the predicted labels of the pixels on
the wall (Figure 1(d)) and there is still not enough evidence in the image to select one hypothesis over
another in a confident manner. However, if we represent the object by a 3D parametric model, such
as a cuboid (Figure 1(e)), then simple volumetric reasoning (the 3D volume occupied by the couch
should be contained in the free space of the room) can help us reject physically invalid hypotheses
and estimate the correct layout of the room by pushing the walls to completely contain the cuboid
(Figure 1(f)).
In this paper, we propose a method to perform volumetric reasoning by combining classical constrained search techniques and current structured prediction techniques. We show that the resulting
[Figure 1: panels (a) Input image; (b) Spatial layout without object reasoning; (c) Object removed; (d) Spatial layout with 2D object reasoning; (e) Object fitted with parametric model; (f) Spatial layout with 3D volumetric reasoning ("Object pushes wall").]
Figure 1: (a) Input image. (b) Estimate of the spatial layout of the room without object reasoning.
Colors represent the output of the surface geometry by [8]. Green: floor, red: left wall, yellow:
center wall, cyan: right wall. (c) Evidence from object region removed. (d) Spatial layout with 2D
object reasoning. (e) Object fitted with 3D parametric model. (f) Spatial layout with 3D volumetric
reasoning. The wall is pushed by the volume occupied by the object.
approach leads to substantially improved performance on standard datasets with the added benefit
of a more complete scene description that includes objects in addition to surface layout.
1.1 Background
The goal of extracting 3D geometry by using geometric relationships between objects dates back
to the start of computer vision around four decades ago. In the early days of computer vision,
researchers extracted lines from ?blockworld? scenes [1] and used geometric relationships using
constraint satisfaction algorithms on junctions [2, 3]. However, the reasoning approaches used in
these block world scenarios (synthetic line drawings) proved too brittle for the real-world images
and could not handle the errors in extraction of line-segments or generalize to other shapes.
In recent years, there has been renewed interest in extracting camera parameters and three-dimensional structures in restricted domains such as Manhattan Worlds [4]. Kosecka et al. [5]
developed a method to recover vanishing points and camera parameters from a single image by
using line segments found in Manhattan structures. Using the recovered vanishing points, rectangular surfaces aligned with major orientations were also detected by [6]. However, these approaches
are only concerned with dominant directions in the 3D world and do not attempt extract three dimensional information of the room and the objects in the room. Yu et al. [7] inferred the relative
depth-order of rectangular surfaces by considering their relationship. However, this method only
provides depth cues of partial rectangular regions in the image and not the entire scene.
There has been a recent series of methods related to our work that attempt to model geometric
scene structure from a single image, including geometric label classification [8, 9] and finding vertical/ground fold-lines [10]. Lee et al. [11] introduced parameterized models of indoor environments,
constrained by rules inspired by blockworld to guarantee physical validity. However, since this approach samples possible spatial layout hypotheses without clutter, it is prone to errors caused by occlusion and tends to fit rooms in which the walls coincide with the object surfaces. A recent paper
by Hedau et al. [12] uses an appearance based clutter classifier and computes visual features only
from the regions classified as ?non-clutter?, while parameterizing the 3D structure of the scene by a
box. They use structured approaches to estimate the best fitting room box to the image. A similar
approach has been used by Wang et al. [13] which does not require the ground truth lables of clutter. In these methods, however, the modeling of interactions between clutter and spatial-layout of
the room is only done in the image plane and the 3D interactions between room and clutter are not
considered.
In a work concurrent to ours, Hedau et al. [14] have also modeled objects as three dimensional
cuboids and considered the volumetric intersection with the room structure. The goal of their work
differs from ours. Their primary goal is to improve object detection, such as beds, by using information of scene geometry, whereas our goal is to improve scene understanding by proposing a control
structure that incorporates volumetric constraints. Therefore, we are able to improve the estimate of
the room by estimating the objects and vice versa, whereas in their work information flows in only
one direction (from scene to objects).
In a very recent work by Gupta et al. [15], qualitative reasoning of scene geometry was done by
modeling objects as ?blocks? for outdoor scenes. In contrast, we use stronger parameteric models
for rooms and objects in indoor scenes, which are more structured, that allows us to do more explicit
and exact 3D volumetric reasoning.
2 Overview
Our goal is to jointly extract the spatial layout of the room and the configuration of objects in the
scene. We model the spatial layout of the room by 3D boxes and we model the objects as solids
which occupy 3D volumes in the free space defined by the room walls. Given a set of room hypotheses and object hypotheses, our goal is to search the space of scene configurations and select
the configuration that best matches the local surface geometry estimated from image cues and satisfies the volumetric constraints of the physical world. These constraints (shown in Figure 3(i))
are:
? Finite volume: Every object in the world should have a non-zero finite volume.
? Spatial exclusion: The objects are assumed to be solid objects which cannot intersect.
Therefore, the volumes occupied by different object are mutually exclusive. This implies
that the volumetric intersection between two objects should be empty.
? Containment: Every object should be contained in the free space defined by the walls of
the room (i.e, none of the objects should be outside the room walls).
Our approach is illustrated in Figure 2. We first extract line segments and estimate three mutually
orthogonal vanishing points (Figure 2(b)). The vanishing points define the orientation of the major
surfaces in the scene [6, 11, 12] and hence constrain the layout of ceilings, floor and walls of the
room. Using the line segments labeled by their orientations, we then generate multiple hypotheses
for rooms and objects (Figure 2(e)(f)). A hypothesis of a room is a 3D parametric representation of
the layout of major surfaces of the scene, such as floor, left wall, center wall, right wall, and ceiling.
A hypothesis of an object is a 3D parametric representation of an object in the scene, approximated
as a cuboid.
The room and cuboid hypotheses are then combined to form the set of possible configurations of
the entire scene (Figure 2(h)). The configuration of the entire scene is represented as one sample of
the room hypothesis along with some subset of object hypotheses. The number of possible scene
configurations is exponential in the number of object hypotheses 1 . However, not all cuboid and
room subsets are compatible with each other. We use simple 3D spatial reasoning to enforce the
volumetric constraints described above (See Figure 2(g)). We therefore test each room-object pair
and each object-object pair for their 3D volumetric compatibility, so that we allow only the scene
configurations which have no room-object and no object-object volumetric intersection.
Finally, we evaluate the scene configurations created by combinations of room hypotheses and object
hypotheses to find the scene configuration that best matches the image (Figure 2(i)). As the scene
configuration is a structured variable, we use a variant of the structured prediction algorithm [16] to
learn the cost function. We use two sources of surface geometry, orientation map [11] and geometric
context [8], which serve as features in the cost function. Since it is computationally expensive to
test exhaustive combinations of scene configurations in practice, we use beam-search to sample the
scene configurations that are volumetrically-compatible (Section 5.1).
3 Estimating Surface Geometry
We would like to predict the local surface geometry of the regions in the image. A scene configuration should satisfy local surface geometry extracted from image cues and should satisfy the 3D
¹ O(n · 2^m), where n is the number of room hypotheses and m is the number of object hypotheses.
[Figure 2: panels (a) Input image; (b) Line segments and vanishing points; (c) Geometric context; (d) Orientation map; (e) Room hypotheses; (f) Cube hypotheses; (g) Reject invalid configurations; (h) Scene configuration hypotheses; (i) Evaluate; (j) Final scene configuration.]
Figure 2: Overview of our approach for estimating the spatial layout of the room and the objects.
volumetric constraints. The estimated surface geometry is therefore used as features in a scoring
function that evaluates a given scene configuration.
For estimating surface geometry we use two methods: the line-sweeping algorithm [11] and a multiple segmentation classifier [8]. The line-sweeping algorithm takes line segments as input and
predicts an orientation map in which regions are classified as surfaces into one of the three possible
orientations. Figure 2(d) shows an example of an orientation map. The region estimated as horizontal surface is colored in red, and vertical surfaces are colored in green and blue, corresponding
to the associated vanishing point. This orientation map is used to evaluate scene configuration hypotheses. The multiple segmentation classifier [8] takes the full image as input, uses image features,
such as combinations of color and texture, and predicts geometric context represented by surface
geometry labels for each superpixel (floor, ceiling, vertical (left, center, right), solid, and porous
regions). Similar to orientation maps, the predicted labels are used to evaluate scene configuration
hypotheses.
4 Generating Scene Configuration Hypotheses
Given the local surface geometry and the oriented line segments extracted from the image, we now
create multiple hypotheses for possible spatial layout of the room and object layout in the room.
These hypotheses are then combined to produce scene configuration layout such that all the objects
occupy exclusive 3D volumes and the objects are inside the freespace of the room defined by the
walls.
4.1 Generating Room Hypotheses
A room hypothesis encodes the position and orientation of walls, floor, and ceiling. In this paper, we
represent a room hypothesis by a parametric box model [12]. Room hypotheses are generated from
line segments in a way similar to the method described in Lee et al. [11]. They examine exhaustive
combinations of line segments and check which of the resulting combinations define physically valid
room models. Instead, we sample random tuples of line segments that define the boundaries
of the parametric box. Only the minimum number of line segments to define the parametric room
model are sampled. Figure 2(e) shows examples of generated room hypotheses.
[Figure 3: (i) Volumetric Constraints: (a) Containment Constraint, (b) Spatial Exclusion Constraint. (ii) Object Hypothesis Generation: (a) Image, (b) Orientation Map, (c) Convex Edge Check ("Convex edge"), (d) Hypothesized Cuboid.]
Figure 3: (i) Examples of volumetric constraint violation. (ii) Object hypothesis generation: we use the orientation maps to generate object hypotheses by finding convex edges.
4.2 Generating Object Hypotheses
Our goal is to extract the 3D geometry of the clutter objects to perform 3D spatial reasoning. Estimating precise 3D models of objects from a single image is an extremely difficult problem and
probably requires recognition of object classes such as couches and tables. However, our goal is to
perform coarse 3D reasoning about the spatial layout of rooms and spatial layout of objects in the
room. We only need to model a subset of objects in the scene to provide enough constraints for
volumetric reasoning. Therefore, we adopt a coarse 3D model of objects in the scene and model
each object-volume as cuboids. We found that parameterizing objects as cuboids provides a good
approximation to the occupied volume in man-made environments. Furthermore, by modeling objects by a parametric model of a cuboid, we can determine the location and dimensions in 3D up to
scale, which allows volumetric reasoning about the 3D interaction between objects and the room.
We generate object hypotheses from the orientation map described above. Figure 3(ii)(a)(b) shows
an example scene and its orientation map. The three colors represent the three possible plane orientations used in the orientation map. We can see from the figure that the distribution of surfaces on the
objects estimated by the orientation map suggests the presence of a cuboidal object. Figure 3(ii)(c)
shows a pair of regions which can potentially form a convex edge if the regions represent the visible
surfaces on a cuboidal object.
We test all pairs of regions in the orientation map to check whether they can form convex edges.
This is achieved by checking the estimated orientation of the regions and the spatial location of the
regions with respect to the vanishing points. If the region pair can form a convex corner, we utilize
these regions to form an object hypothesis. To generate a cuboidal object hypothesis from pairs of
regions, we first fit tight bounding quadrilaterals (Figure 3(ii)(c)) to each region in the pair and then
sample all combinations of three points out of the eight vertices on the two quadrilaterals, which do
not lie on a plane. Three is the minimum number of points (with (x, y) coordinates) that have enough
information to define a cuboid projected onto a 2D image plane, which has five degrees of freedom.
We can then hypothesize a cuboid, whose corner best apprximates the three points. Figure 3(ii)(d)
shows a sample of a cuboidal object hypothesis generated from the given orientation map.
4.3 Volumetric Compatibility of Scene Configuration
Given a room configuration and a set of candidate objects, a key operation is to evaluate whether the
resulting combination satisfies the three fundamental volumetric compatibility constraints described
in Section 2. The problem of estimating the three dimensional layout of a scene from a single image
is inherently ambiguous because any measurement from a single image can only be determined up
to scale. In order to test the volumetric compatibility of room-object hypotheses pairs and objectobject hypotheses pairs, we make the assumption that all objects rest on the floor. This assumption
fixes the scale ambiguity between room and object hypotheses and allows us to reason about their
3D location.
To test whether an object is contained within the free space of a room, we check whether the projection of the bottom surface of the object onto the image is completely contained within the projection
of the floor surface of the room. If the projection of the bottom surface of the object is not completely
within the floor surface, the corresponding 3D object model must be protruding into the walls of the
room. Figure 3(i)(a) shows an example of an incompatible room-object pair.
Similarly, to test whether the volume occupied by two objects is exclusive, we assume that the
two objects rest on the same floor plane and we compare the projection of their bottom surfaces
onto the image. If there is any overlap between the projections of the bottom surface of the two
object hypotheses, that means that they occupy intersecting volumes in 3D. Figure 3(i)(b) shows an
example of an incompatible object-object pair.
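Both tests reduce to 2D operations on floor-plane footprints. The sketch below is our own code with our own names; it uses axis-aligned rectangles (xmin, xmax, zmin, zmax) for simplicity, whereas the paper's projected footprints are general quadrilaterals, but the logic is the same:

```python
def contained(obj, room):
    """Containment: object footprint lies inside the room's floor rectangle."""
    return (obj[0] >= room[0] and obj[1] <= room[1] and
            obj[2] >= room[2] and obj[3] <= room[3])

def overlaps(a, b):
    """Spatial exclusion is violated when the two footprints intersect."""
    return not (a[1] <= b[0] or b[1] <= a[0] or a[3] <= b[2] or b[3] <= a[2])

def compatible(room, objects):
    """A configuration is valid iff every object is inside the free space
    and no two objects occupy intersecting volumes."""
    if any(not contained(o, room) for o in objects):
        return False
    return all(not overlaps(objects[i], objects[j])
               for i in range(len(objects))
               for j in range(i + 1, len(objects)))
```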
5 Evaluating Scene Configurations
5.1 Inference
Given an image x, a set of room hypotheses {r1 , r2 , ..., rn }, and a set of object hypotheses
{o_1, o_2, ..., o_m}, our goal is to find the best scene configuration y = (y_r, y_o), where y_r = (y_{r_1}, ..., y_{r_n}) and y_o = (y_{o_1}, ..., y_{o_m}). y_{r_i} = 1 if room hypothesis r_i is used in the scene configuration and y_{r_i} = 0 otherwise, and y_{o_i} = 1 if object hypothesis o_i is present in the scene configuration and y_{o_i} = 0 otherwise. Note that Σ_i y_{r_i} = 1, as only one room hypothesis is needed to define the scene
configuration.
Suppose that we are given a function f (x, y) that returns a score for y. Finding the best scene
configuration y* = arg max_y f(x, y) through testing all possible scene configurations requires n · 2^m evaluations of the score function. We resort to using beam search (a fixed-width search tree) to
keep the computation manageable by avoiding evaluating all scene configurations.
In the first level of the search tree, scene configurations with a room hypothesis and no object hypothesis are evaluated. In the following levels, an object hypothesis is added to its parent configuration
and the configuration is evaluated. The top kl nodes with the highest score are added to the search
tree as child nodes, where k_l is a pre-determined beam width for level l.² The search is continued
for a fixed number of levels or until no cubes that are compatible with existing configurations can
be added. After the search tree has been explored, the best scoring node in the tree is returned as the
best scene configuration.
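For concreteness, a compact version of this search is sketched below (our own code; the scoring function `score` and the pairwise compatibility test `compatible` are assumed to be given, and all names are ours):

```python
def beam_search(rooms, objects, score, compatible, widths=(100, 5, 2, 1)):
    """Beam search over scene configurations: the first level picks a room,
    each later level adds one compatible object hypothesis."""
    key = lambda node: node[0]
    beam = [(score(r, ()), r, ()) for r in rooms]
    beam = sorted(beam, key=key, reverse=True)[:widths[0]]
    best = max(beam, key=key)
    for width in widths[1:]:
        children = [(score(r, objs + (o,)), r, objs + (o,))
                    for (_, r, objs) in beam
                    for o in objects
                    if o not in objs and compatible(r, objs + (o,))]
        if not children:
            break  # no cube can be added to any surviving configuration
        beam = sorted(children, key=key, reverse=True)[:width]
        best = max([best] + beam, key=key)
    return best  # best scoring node anywhere in the search tree
```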
5.2 Learning the Score Function
We set the score function to f(x, y) = wᵀφ(x, y) + w̄ᵀψ(y), where φ(x, y) is a feature vector
for a given image x and measures the compatibility of the scene configuration y with the estimated
surface geometry. ψ(y) is the penalty term for incompatible configurations and penalizes the room
and object configurations which violate volumetric constraints.
We use structured SVM [16] to learn the weight vector w. The weights are learned by solving

$$\min_{w,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_i \xi_i$$
$$\text{s.t.}\quad w^T\phi(x_i, y_i) - w^T\phi(x_i, y) - \bar{w}^T\psi(y) \ge \Delta(y_i, y) - \xi_i,\quad \forall i, \forall y$$
$$\xi_i \ge 0,\quad \forall i,$$

where x_i are images, y_i are the ground-truth configurations, ξ_i are slack variables, and Δ(y_i, y)
is the loss function that measures the error of configuration y. Tsochantaridis [16] deals with the
large number of constraints by iteratively adding the most violated constraints. We simplify this
by sampling a fixed number of configurations per each training image, using the same beam search
process used for inference, and solving using quadratic programming.
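As an illustration of this learning step, the sketch below (our own code and names) optimizes the same margin-rescaled objective with a stochastic subgradient update rather than the QP used in the paper; the penalty features ψ are assumed to be folded into φ:

```python
import numpy as np

def train_weights(data, phi, loss, sample_configs, dim, C=1.0, epochs=20, lr=1e-3):
    """Margin-rescaled structured SVM by stochastic subgradient (a sketch).

    data: list of (x_i, y_i) pairs; phi(x, y): joint feature vector;
    sample_configs(x): candidate configurations from beam search."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y_true in data:
            # most violated configuration among the sampled candidates
            y_hat = max(sample_configs(x),
                        key=lambda y: w @ phi(x, y) + loss(y_true, y))
            g = w / C  # subgradient of the regularizer
            if w @ phi(x, y_hat) + loss(y_true, y_hat) > w @ phi(x, y_true):
                g += phi(x, y_hat) - phi(x, y_true)  # hinge is active
            w -= lr * g
    return w
```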
Loss Function: The loss function Δ(y_i, y) is the percentage of pixels in the entire image having an
incorrect label. For example, pixels that are labeled as left wall when they actually belong to the
center wall, or pixels labeled as object when they actually belong to the floor would be counted as
incorrectly labeled pixels. A wall is labeled as center if the surface normal is within 45 degrees from
the camera optical axis and labeled as left or right, otherwise.
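Once per-pixel surface labels are available, the loss is a one-line computation; the integer label encoding below is an assumption of this sketch, not taken from the paper.

```python
import numpy as np

# Assumed encoding: 0=floor, 1=left wall, 2=center wall, 3=right wall,
# 4=ceiling, 5=object; gt and pred are integer arrays of shape (H, W).
def layout_loss(gt, pred):
    """Delta(y_i, y): fraction of image pixels with an incorrect surface label."""
    return float(np.mean(gt != pred))
```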
Feature Vector: The feature vector φ(x, y) is computed by measuring how well each surface in
the scene configuration y is supported by the orientation map and the geometric context. A feature
² We set k_l to (100, 5, 2, 1), with a maximum of 4 levels. The results were not sensitive to these parameters.
(Figure 4 panels, left to right: input image, orientation map, geometric context, room only, room and objects.)
Figure 4: Two qualitative examples showing how 3D volumetric reasoning aids estimation of the
spatial layout of the room.
                          OM+GC    OM       GC
No object reasoning       18.6%    24.7%    22.7%
Volumetric reasoning      16.2%    19.5%    20.2%

Table 1: Percentage of pixels with an incorrect estimate of the room surfaces (pixel error; see the
evaluation measure in Section 6). The first row performs no reasoning about objects. The second row
is our approach with 3D volumetric reasoning about objects. Columns show the features that are
used. OM: orientation map from [11]. GC: geometric context from [8].
is computed for each of the six surfaces in the scene configuration (floor, left wall, center wall,
right wall, ceiling, object) as the relative area in which the orientation map or the geometric context
correctly explains the attribute of the surface. This results in a twelve-dimensional feature vector for
a given scene configuration. For example, the feature for the floor surface in the scene configuration
is computed from the relative area in which the orientation map predicts a horizontal surface, and the area
in which the geometric context predicts a floor label.
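A sketch of this feature computation is given below. It assumes, for simplicity, that the orientation map and the geometric context have both been converted to the same per-pixel surface-label encoding as the scene configuration, which abstracts away the orientation-to-surface matching described above.

```python
import numpy as np

SURFACES = ["floor", "left", "center", "right", "ceiling", "object"]

def layout_features(config_labels, om_labels, gc_labels):
    """12-dim phi(x, y): for each of the six surfaces, the fraction of its pixels
    whose attribute is correctly explained by the orientation map (first six
    entries) and by the geometric context (last six entries). All three inputs
    are integer label maps indexed into SURFACES (an encoding assumed here)."""
    feats = []
    for source in (om_labels, gc_labels):
        for s in range(len(SURFACES)):
            mask = config_labels == s          # pixels the configuration assigns to s
            feats.append(float((source[mask] == s).mean()) if mask.any() else 0.0)
    return np.array(feats)
```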
Volumetric Penalty: The penalty term ψ(y) measures how much the volumetric constraints are
violated. (1) The first term ψ(y_r, y_o) measures the volumetric intersection between the volume
defined by the room walls and the objects. It penalizes the configurations where an object hypothesis
lies outside the room volume, and the penalty is proportional to the volume outside the room. (2)
The second term \sum_{i,j} ψ(y_{o_i}, y_{o_j}) measures the volume intersection between two objects (i, j). The
penalty from this term is proportional to the overlap of the cubes projected on the floor.
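Both penalty terms reduce to polygon areas in the image plane. The following sketch shows one plausible implementation using shapely; normalizing each term by the base area is a choice made here for illustration and is not taken from the paper.

```python
from shapely.geometry import Polygon

def containment_penalty(floor_poly, object_base_poly):
    """psi(y_r, y_o): proportional to the part of the object's projected base
    that falls outside the room's projected floor polygon."""
    obj = Polygon(object_base_poly)
    return obj.difference(Polygon(floor_poly)).area / obj.area

def exclusion_penalty(base_poly_a, base_poly_b):
    """psi(y_oi, y_oj): proportional to the overlap of the two projected bases."""
    a, b = Polygon(base_poly_a), Polygon(base_poly_b)
    return a.intersection(b).area / min(a.area, b.area)
```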
6 Experimental Results
We evaluated our 3D geometric reasoning approach on an indoor image dataset introduced in [12].
The dataset consists of 314 images, and the ground-truth consists of the marked spatial layout of the
room and the clutter layouts. For our experiments, we use the same training-test split as used in [12]
(209 training and 105 test images). We use training images to estimate the weight vector.
Qualitative Evaluation: Figure 4 illustrates the benefit of 3D spatial reasoning introduced in our
approach. If no 3D clutter reasoning is used and the room box is fitted to the orientation map and
geometric context, the box gets fit to the object surfaces and therefore leads to substantial error
in the spatial layout estimation. However, if we use 3D object reasoning, the walls get pushed due to
the containment constraint and the spatial layout estimation improves. We can also see from the
examples that extracting a subset of objects in the scene is enough for reasoning and improving
the spatial layout estimation. Figures 5 and 6 show more examples of the spatial layout and the
estimated clutter objects in the images. Additional results are in the supplementary material.
Quantitative Evaluation: We evaluate the performance of our approach in estimating the spatial
layout of the room. We use the pixel-based measure introduced in [12] which counts the percentage
of pixels on the room surfaces that disagree with the ground truth. For comparison, we employ the
simple multiple segmentation classifier [8] and the recent approach introduced in [12] as baselines.
The images in the dataset have significant clutter; therefore, simple classification based approaches
with no clutter reasoning perform poorly and have an error of 26.5%. The state-of-the-art approach
[12] which utilizes clutter reasoning in the image plane has an error of 21.2%. On the other hand, our
Figure 5: Additional examples to show the performance on a wide variety of scenes. Dotted lines
represent the room estimate without object reasoning.
Figure 6: Failure examples. The first two are failure cases where the cuboids are either missed or
wrongly estimated. The last two failure cases are due to errors in vanishing point estimation.
approach which uses a parametric model of clutter and simple 3D volumetric reasoning outperforms
both the approaches and has an error of 16.2%.
We also performed several experiments to measure the significance of each step and features in our
approach. When we only use the surface layout estimates from [8] as features of the cost function,
our approach has an error rate of 20.2% whereas using only orientation maps as features yields an
error rate of 19.5%. We also tried several search techniques to search the space of hypotheses. With
a greedy approach (best cube added at each iteration) to search the hypothesis space, we achieved
an error rate of 19.2%, which shows that early commitment to partial configurations leads to error
and that a search strategy that allows late commitment, such as beam search, should be used.
7 Conclusion
In this paper, we have proposed the use of volumetric reasoning between objects and surfaces of
room layout to recover the spatial layout of a scene. By parametrically representing the 3D volume
of objects and rooms, we can apply constraints for volumetric reasoning, such as spatial exclusion
and containment. Our experiments show that volumetric reasoning improves the estimate of the
room layout and provides a richer interpretation about objects in the scene. The rich geometric
information provided by our method can provide crucial information for object recognition and
eventually aid in complete scene understanding.
8 Acknowledgements
This research was supported by NSF Grant EEEC-0540865, ONR MURI Grant N00014-07-1-0747,
NSF Grant IIS-0905402, and ONR Grant N000141010766.
References
[1] L. Roberts. Machine perception of 3-D solids. PhD thesis, 1965.
[2] A. Guzman. Decomposition of a visual scene into three-dimensional bodies. In Proceedings of the Fall Joint Computer Conference, 1968.
[3] D. A. Waltz. Generating semantic descriptions from line drawings of scenes with shadows. Technical report, MIT, 1972.
[4] J. Coughlan and A. Yuille. Manhattan world: Compass direction from a single image by Bayesian inference. In Proc. ICCV, 1999.
[5] J. Kosecka and W. Zhang. Video compass. In Proc. ECCV, 2002.
[6] J. Kosecka and W. Zhang. Extraction, matching, and pose recovery based on dominant rectangular structures. CVIU, 2005.
[7] S. Yu, H. Zhang, and J. Malik. Inferring spatial layout from a single image via depth-ordered grouping. In IEEE Computer Society Workshop on Perceptual Organization in Computer Vision, 2008.
[8] D. Hoiem, A. Efros, and M. Hebert. Recovering surface layout from an image. IJCV, 75(1), 2007.
[9] A. Saxena, M. Sun, and A. Ng. Make3D: Learning 3D scene structure from a single image. PAMI, 2008.
[10] E. Delage, H. Lee, and A. Ng. A dynamic Bayesian network model for autonomous 3D reconstruction from a single indoor image. In Proc. CVPR, 2006.
[11] D. Lee, M. Hebert, and T. Kanade. Geometric reasoning for single image structure recovery. In Proc. CVPR, 2009.
[12] V. Hedau, D. Hoiem, and D. Forsyth. Recovering the spatial layout of cluttered rooms. In Proc. ICCV, 2009.
[13] H. Wang, S. Gould, and D. Koller. Discriminative learning with latent variables for cluttered indoor scene understanding. In Proc. ECCV, 2010.
[14] V. Hedau, D. Hoiem, and D. Forsyth. Thinking inside the box: Using appearance models and context based on room geometry. In Proc. ECCV, 2010.
[15] A. Gupta, A. Efros, and M. Hebert. Blocks world revisited: Image understanding using qualitative geometry and mechanics. In Proc. ECCV, 2010.
[16] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453-1484, 2005.
3,447 | 4,121 | Decoding Ipsilateral Finger Movements from ECoG
Signals in Humans
Yuzong Liu¹, Mohit Sharma², Charles M. Gaona², Jonathan D. Breshears³, Jarod Roland³,
Zachary V. Freudenburg¹, Kilian Q. Weinberger¹, and Eric C. Leuthardt²,³
¹ Department of Computer Science and Engineering, Washington University in St. Louis
² Department of Biomedical Engineering, Washington University in St. Louis
³ Department of Neurosurgery, Washington University School of Medicine
Abstract
Several motor related Brain Computer Interfaces (BCIs) have been developed over
the years that use activity decoded from the contralateral hemisphere to operate devices. Contralateral primary motor cortex is also the region most severely affected
by hemispheric stroke. Recent studies have identified ipsilateral cortical activity
in planning of motor movements and its potential implications for a stroke relevant BCI. The most fundamental functional loss after a hemispheric stroke is the
loss of fine motor control of the hand. Thus, whether ipsilateral cortex encodes
finger movements is critical to the potential feasibility of BCI approaches in the
future. This study uses ipsilateral cortical signals from humans (using ECoG) to
decode finger movements. We demonstrate, for the first time, successful finger
movement detection using machine learning algorithms. Our results show high
decoding accuracies in all cases, always above chance. We also show
that significant accuracies can be achieved with the use of only a fraction of all the
features recorded and that these core features are consistent with previous physiological findings. The results of this study have substantial implications for advancing neuroprosthetic approaches to stroke populations not currently amenable
to existing BCI techniques.
1 Introduction
Note by authors after publication: The results in Figure 3 could not be reproduced in subsequent experiments and should be considered invalid. We apologize for this mishap. Other results in this paper are not affected.

The evolving understanding of motor function in the brain
has led to novel Brain Computer Interface (BCI) platforms that can potentially assist patients with
severe motor disabilities. A BCI is a device that can decode human intent from brain activity alone
in order to create an alternate communication and control channel for people with severe motor impairments [39]. This brain-derived control is dependent on the emerging understanding of cortical
physiology as it pertains to motor function. Examples are seen in the seminal discoveries by Georgopoulos and Schwartz that neurons in motor cortex show directional tuning and, when taken as a
population, can predict direction and speed of arm movements in monkey models [12, 19]. In the
subsequent two decades, these findings were translated to substantial levels of brain-derived control in monkey models and preliminary human clinical trials [14, 34]. Another example is seen in
Pfurtscheller's work in analyzing electroencephalography (EEG). His group was one of the first to
describe the changes in amplitudes in sensorimotor rhythms associated with motor movement [24].
As a result, both Pfurtscheller and Wolpaw have used these signals to achieve basic levels of control
in humans with amyotrophic lateral sclerosis (ALS) and spinal cord injury [25, 40]. All these methods are based on a functioning motor cortex capable of controlling the contralateral limb. This is the
exact situation that does not exist in unilateral stroke. Hence, these systems to date offer little hope
for patients suffering from hemispheric stroke. For a BCI to assist a hemiparetic patient, the implant
will likely need to utilize unaffected cortex ipsilateral to the affected limb (opposite the side of the
stroke). To do so, an expanded understanding of how and to what degree of complexity motor and
motor associated cortex encodes ipsilateral hand movements is essential.
Electrocorticography (ECoG), or signal recorded from the surface of the brain, offers an excellent
opportunity to further define what level of motor information can be deciphered from human ipsilateral cortex related to movements (e.g. gross motor movements versus fine motor kinematics of
individual finger movements). The ECoG signal is more robust compared to the EEG signal: its
magnitude is typically five times larger, its spatial resolution as it relates to independent signals is
much greater (0.125 versus 3.0 cm for EEG), and its frequency bandwidth is significantly higher
(0-550 Hz versus 0-40 Hz for EEG) [11, 30]. When analyzed on a functional level, many studies
have revealed that different frequency bandwidths carry highly specific and anatomically focal information about cortical processing. Thus far, however, no studies have utilized these ECoG spectral
features to definitively analyze and decode cortical processing of the specific kinematics of ipsilateral finger movements.
In the past year, the first demonstration of this concept of utilizing ipsilateral motor signals for
simple device control have been published both with ECoG (in healthy subjects) and MEG (in
stroke patients) [4, 38]. In this study we set out to further explore the decoding of individual finger
movements of the ipsilateral hand that could potentially be utilized for more sophisticated BCIs in
the future. We studied 3 subjects who required invasive monitoring for seizure localization. Each had
electrode arrays placed over the frontal lobe and a portion of sensorimotor cortex for approximately a
week. Each subject performed individual finger tasks and the concurrent ECoG signal was recorded
and analyzed. The principal results show that individual ipsilateral finger movements can be decoded
with high accuracy. Through machine learning techniques, our group was able to determine the
intent to flex and extend individual finger movements of the ipsilateral hand. These results indicate
that an ECoG based BCI platform could potentially operate a hand orthotic based on ipsilateral
motor signals. This could provide a neuroprosthetic alternative to patients with hemispheric stroke
who have otherwise failed non-invasive and medical rehabilitative techniques.
2 Data Collection
The subjects in this study were three patients (females; 8, 36, 48 years of age) with intractable
epilepsy who underwent temporary placement of intracranial electrode arrays to localize seizure foci
prior to surgical resection. All had normal levels of cognitive function and all were right-handed.
Subject 1 had a right hemispheric 8×8 grid while Subjects 2 and 3 had left hemispheric 8×8 grids.
All gave informed consent. The study was approved by the Washington University Human Research
Protection Office.
Each subject sat in their hospital bed 75 cm from a 17-inch LCD video screen. In this study, the
subject wore a data glove on each hand to precisely monitor finger movements. Each hand rested
on a table in front of the screen. The screen randomly cued the patient to flex and extend a given
finger (e.g., left index finger, right ring finger, etc.). A cue came up on the monitor and as long
as it was present, subjects would, at a self-paced speed, move the indicated finger from the flexed
to the extended position until the cue disappeared. They were instructed on the method prior to
participation. Each cued task period would last 2 seconds with a randomized rest period between
1.5 and 2.5 seconds(i.e., a trial). There were on average 30 trials per finger for a given subject.
For Subject 1, the thumb data recording was found to be noisy and hence was eliminated from any
further analysis. Visual cues were presented using the BCI2000 program [27]. All motor hand
kinematics were monitored by the patient wearing a USB linked 5DT Data Glove 5 Ultras (Fifth
Dimension, Irvine, CA) on each hand. These data gloves are designed to measure finger flexure
with one sensor per finger at up to 8-bit flexure resolution. The implanted platinum electrode arrays
were 8×8 electrode arrays (Ad-Tech, Racine, WI and PMT, Chanhassen, MN). The grid and system
setup details are described elsewhere [38]. ECoG signals were acquired using BCI2000, stored,
and converted to MATLAB files for further processing and analysis. All electrodes were referenced
to an inactive intracranial electrode. The sampling frequency was 1200 Hz, and data acquisition was
band-pass filtered from 0.15 to 500 Hz.
2.1 Data Preprocessing
Gabor Filter Analysis All ECoG data sets were visually inspected and re-referenced with respect to
the common average to account for any unwanted environmental noise. For these analyses, the time-series
amplitudes between 0 and 550 Hz were analyzed on a logarithmic scale. The finger positions from
the data glove were converted into velocities. These frequency responses and velocities were then
used as an input to machine learning algorithms described below. Inherent in this is the estimation
of the lag between the ECoG signal and the actual finger movement. As part of the modeling
process, the value of this variable which resulted in the best decoding accuracy was chosen for
further analysis. Average time lags were then used to align the ECoG signal to the finger movement
signal. Those features optimized for predicting individual finger movement were then reviewed in
light of anatomic location and spectral association in each subject.
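As an illustration of this preprocessing stage (not the authors' pipeline), a Gabor filter bank can be implemented directly as complex exponentials under Gaussian windows. The 1200 Hz sampling rate and the logarithmically spaced 0-550 Hz band follow the paper; the number of filters and cycles per filter are arbitrary choices of this sketch.

```python
import numpy as np

def gabor_amplitudes(sig, fs, freqs, n_cycles=7):
    """Spectral amplitude envelope of one ECoG channel at each frequency,
    computed by convolving with complex Gabor (Morlet-style) kernels."""
    amps = np.empty((len(freqs), len(sig)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)               # Gaussian width in seconds
        t = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)   # kernel support
        kernel = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        kernel /= np.abs(kernel).sum()                   # unit-gain normalization
        amps[i] = np.abs(np.convolve(sig, kernel, mode="same"))
    return amps                                          # shape: (n_freqs, n_samples)

fs = 1200.0                                              # sampling rate from the paper
freqs = np.logspace(np.log10(1), np.log10(550), 40)      # log scale over 0-550 Hz band
ecog = np.random.randn(int(10 * fs))                     # placeholder for one channel
features = gabor_amplitudes(ecog, fs, freqs)
```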
Dimensionality Reduction Due to the high dimensionality of the spectral data (#channels (N) ×
#frequencies (F)), it is important to reduce the dimensions in order to build a more conducive
machine learning algorithm. Principal component analysis, or PCA, is among the most popular
dimensionality reduction algorithms. PCA projects the original high-dimensional feature space onto
a much lower-dimensional principal subspace, such that the variance of the low-dimensional data is maximized. In
the real-time decoding task, we use PCA to reduce the input data. However, in the weight analysis,
we preserve all N × F features because we want to study the effect of using all the features.
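A minimal PCA sketch using scikit-learn is below; the data and the number of components are placeholders, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: time points; columns: N * F spectral-amplitude features.
X = np.random.randn(5000, 64 * 40)        # placeholder data for illustration
pca = PCA(n_components=100)               # target dimensionality is a free choice
Z = pca.fit_transform(X)                  # low-dimensional inputs for the decoders
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```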
Electrode Co-Registration Radiographs were used to identify the stereotactic coordinates of each
grid electrode [10], and cortical areas were defined using the GetLOC package for ECoG electrode localization [18]. Stereotactically defined electrodes were mapped to the standardized brain model. The
experimental results were then collated with these anatomical mapping data.
3 Algorithms
In this section, we describe the machine learning algorithms used for the finger movement decoding
tasks. We focus on three different settings: 1. binary classification, 2. multiclass classification and
3. multitask classification. All the data is split into a training and a testing dataset. We chose our
parameters based on a validation dataset split from the training dataset.
Binary Classification We treat the finger movement detection problem as a binary classification
setting. The data is presented as a time series with feature vector xt and velocity label yt at time t.
The goal is to predict if at time t a finger is moving (y_t = 1) or not (y_t = -1).
For this purpose, we adapted logistic regression (LR) [26] and binary support vector machines
(SVM) [7]. Both classifiers learn parameters (w, b) ∈ R^d × R. The prediction at time t is computed
as ŷ_t = sign(w^T x_t + b). The vector w is learned with the following optimization problem
    \min_{(w,b)} \; \sum_{t=1}^{T} L(w^\top x_t + b, y_t) + \lambda |w|_q . \qquad (1)
Here, λ ≥ 0 is the regularization constant that trades off weight sparsity with complexity. The norm
in the regularization can be the ℓ1 norm (q = 1) or the ℓ2 norm (q = 2). The ℓ1 norm has the
tendency to produce sparse classifiers which assign non-zero weights to only a small subset of the
available features. This allows us to infer which brain regions and frequencies are most important
for accurate predictions. The ℓ2 norm tends to yield slightly better classification results (and is easier
to optimize) but is not as interpretable, as it typically assigns small weights to many features. The
loss functions L differ for the two above-mentioned algorithms. We denote the loss function for
logistic regression as L_lr and for SVMs as L_svm. The exact definitions are:
    L_{lr}(z, y) = \log(1 + \exp(-yz))
    L_{svm}(z, y) = \max(1 - yz, 0) \qquad (2)
Multiclass Classification A second setting of interest is the differentiation of fingers. Here we do
not want to predict if a finger is moving but which one. Consequently, at any time point t we could
have one of K possible labels, such as "Index Finger" (y_t = 1), "Ring Finger" (y_t = 2), etc. We
adopt the Crammer and Singer multi-class adaptation of support vector machines (MCSVM) [8].
For each class k ∈ {1, ..., K}, we learn class-specific parameters w_k, b_k. The loss only focuses on
pairwise comparisons between the different classes and ensures that w_k^T x_t + b_k ≥ w_r^T x_t + b_r + 1
if y_t = k, for any r ≠ k. For completeness, we re-state the optimization problem:

    \min_{(w_1,b_1),\ldots,(w_K,b_K)} \; \sum_{t=1}^{T} \sum_{r \neq y_t} \max\big(1 + w_r^\top x_t + b_r - (w_{y_t}^\top x_t + b_{y_t}), 0\big) + \lambda \sum_{k=1}^{K} |w_k|_q . \qquad (3)
Similar to the scenario of binary classification, the constant λ ≥ 0 regulates the trade-off between
complexity and sparseness.
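The Crammer-Singer MCSVM is available off the shelf; below is a minimal sketch with placeholder data (scikit-learn's LinearSVC uses ℓ2 regularization, corresponding to the q = 2 case).

```python
import numpy as np
from sklearn.svm import LinearSVC

Z_move = np.random.randn(1200, 100)          # placeholder movement-only samples
labels = np.random.randint(0, 5, size=1200)  # finger index 0..4 per sample

clf = LinearSVC(multi_class="crammer_singer", C=1.0)  # Crammer-Singer MCSVM [8]
clf.fit(Z_move, labels)
pred = clf.predict(Z_move)                   # predicted finger per sample
```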
Multitask Learning In the movement detection setting, each finger is learned as an independent
classification problem. In the finger discrimination setting, we actively discriminate between the
individual fingers. Multitask learning (MTL) is a way to combine the binary finger movement
detection problems by learning them jointly [5]. In the setting of brain decoding, it seems reasonable
to assume that there are certain features which are associated with the general cortical processing of
finger movements. This is analogous to the notion of language processing and articulation in cortical
areas. Functional magnetic resonance imaging (fMRI) studies have shown that although speech is
represented in general cortical areas, individual features specific to different kinds of words can
be found [16, 23]. We adopt the MTL adaptation for SVMs of [9], and an analogous framework
for logistic regression, which leverages the commonalities across learning tasks by modeling them
explicitly with an additional shared weight vector w0 . The prediction at time t for finger k is defined
as ŷ_t = (w_0 + w_k)^T x_t. The corresponding optimization problem becomes
    \min_{w_0,w_1,\ldots,w_K} \; \lambda_0 |w_0|_q + \sum_{k=1}^{K} \sum_{t=1}^{T} L\big((w_0 + w_k)^\top x_t, y_t\big) + \sum_{k=1}^{K} \lambda_k |w_k|_q . \qquad (4)
The parameter λ_0 regulates how much of the learning is shared. If λ_0 → +∞, then w_0 = 0 and we
reduce our setting to the original binary classification mentioned above. On the other hand, setting
λ_0 = 0 and λ_{k>0} → ∞ will result in weight vectors w_{k>0} = 0. As a result, one would learn only a
single classifier with weight vector w_0 for generic finger movement.
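One compact way to realize model (4) is the feature-augmentation trick for regularized MTL: a single linear classifier on block-structured features implements predictions (w_0 + w_k)^T x, and the scale of the shared block controls the λ_0/λ_k trade-off. The sketch below uses placeholder data and is an illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def mtl_augment(X, tasks, K, shared_scale=1.0):
    """Evgeniou-Pontil style augmentation: a stacked weight vector
    (w0, w1, ..., wK) applied to [shared_scale * x | 0 .. x .. 0]
    yields the prediction (w0 + wk)^T x for a sample of task k."""
    n, d = X.shape
    Z = np.zeros((n, d * (K + 1)))
    Z[:, :d] = shared_scale * X                    # shared block -> w0
    for i, k in enumerate(tasks):
        Z[i, d * (1 + k): d * (2 + k)] = X[i]      # task-specific block -> wk
    return Z

X = np.random.randn(4000, 100)                     # placeholder spectral features
tasks = np.random.randint(0, 5, size=4000)         # finger index per sample
y = np.sign(np.random.randn(4000))                 # moving vs. resting labels
clf = LinearSVC(C=1.0).fit(mtl_augment(X, tasks, K=5), y)
```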
4 Results
In this section we evaluate our algorithms for ipsilateral decoding on three subjects. First, we approximate the time lag between the ECoG signal and finger movement; then we present decoding results
on finger movement detection, discrimination, and joint decoding of all fingers in one hand.
Time Lag We first study the effects of the decoding time lag between cortical signal and movement
using features. The decoding accuracy is computed by shifting the feature dataset x_t and the target
dataset y_t by a presumed number of sample points (i.e., we evaluate the performance of a decoder h:
h(x_t) = y_{t+ΔT}, while increasing the value of ΔT). The best time lag is selected as the value of ΔT
which leads to the best decoding accuracy. Figure 1 shows the decoding accuracy as a function of
time lag for four individual finger movements in Subject 1. Offsets between 0 and 800 ms are tested
for all fingers and an average offset time is computed. The average time lag for ipsilateral finger
movement for Subject 1 is observed to be around 158 ms. This is in accordance with previous studies
by our group which show similar time lags between cortical activity and actual movements [38].
All further analysis is based on cortical activity (features) shifted relative to movement by the average
time lag reported here.

(Figure 1 panel: area under the ROC curve vs. time lag, 0-750 ms; curves for Index, Middle, Ring, Little, and their average.)
Figure 1: Decoding time lag for ipsilateral finger movement in Subject 1. The x-axis is the presumed
time lag ΔT (ms) between input feature vectors and target labels, and the y-axis is the area under
the ROC curve computed from the ℓ1-regularized logistic regression model. The bold black line is
the average AUC, and the best decoding time lag is indicated by the black dotted line.
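The lag selection itself is a simple scan. The sketch below is an illustration: the grid of candidate lags is arbitrary, and for brevity AUC is computed on the training data rather than on a held-out split as one would in practice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def best_time_lag(X, y, fs=1200, max_ms=800, step_ms=50):
    """Scan candidate lags dT, fit h(x_t) = y_{t+dT}, keep the AUC-maximizing dT."""
    aucs = {}
    for ms in range(0, max_ms + 1, step_ms):
        s = int(ms * fs / 1000)                        # lag in samples
        Xs, ys = (X[:-s], y[s:]) if s else (X, y)      # align features to shifted labels
        clf = LogisticRegression(penalty="l1", solver="liblinear").fit(Xs, ys)
        aucs[ms] = roc_auc_score(ys, clf.decision_function(Xs))
    return max(aucs, key=aucs.get), aucs
```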
(Figure 2 panels (a)-(c): ROC curves for Subjects 1, 2, and 3; one curve per finger (Index, Middle, Ring, Little, plus Thumb for Subjects 2 and 3); axes: False Positive Rate vs. True Positive Rate.)
Figure 2: ROC curve for the ipsilateral finger movement decoder. Horizontal axis shows the false
positive rate, and the vertical axis shows the true positive rate. The dotted line is the accuracy of a
random classifier. Classifiers that have higher area under the ROC curve, or AUC, indicate better
classification performance.
Detecting Finger Movement We characterize the movement detection task as binary classification. We first set a threshold thresh, and label the targets y_t as 1 if the velocity at time t satisfies v_t ≥ thresh,
and -1 otherwise. Then, we use ℓ1-regularized logistic regression for the binary classification. We
use the receiver operating characteristic (ROC) curve to evaluate the performance of the binary classifier. The ROC curve is widely used in signal estimation and detection theory, and is a graphical
plot of the true positive rate versus the false positive rate. ROC analysis allows the user to pick the optimal discrimination threshold for the binary classifier. We pick the regularizer λ from the validation dataset.
Figure 2 shows the ROC curves for the three subjects. This demonstrates that ℓ1-regularized
logistic regression is a powerful tool for detecting finger movement.
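End to end, this detection pipeline is a few lines with scikit-learn; the threshold value and the placeholder data below are illustrative, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc

velocities = np.abs(np.random.randn(5000))        # data-glove finger velocities
X = np.random.randn(5000, 100)                    # time-lag-aligned spectral features
thresh = 0.5                                      # movement threshold (example value)
y = np.where(velocities >= thresh, 1, -1)         # moving vs. not moving

clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
fpr, tpr, thresholds = roc_curve(y, clf.decision_function(X))
print("AUC:", auc(fpr, tpr))                      # area under the ROC curve
```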
Finger Discrimination In this section, we study how to discriminate which finger has made the
movement. We first extract from the time series the sample points at which a finger is moving.
We then apply a multiclass SVM for the classification. The result is shown as the confusion
matrices in Figure 3, where the colorbar shows the accuracy. Each row of the matrix represents the
finger that actually moved and each column represents the predicted finger. The elements of the matrix
show the percentage of all movements of a particular finger that have been classified as a particular
predicted finger. Note that the accuracy of a random multiclass classifier is 1/(number of fingers).
It can be concluded that the ECoG signal contains useful information to discriminate individual
finger movements.
(Figure 3 panels: confusion matrices for the three subjects; rows: actual movement, columns: predicted movement.)
Figure 3: Note from authors after publication: results in this figure are invalid (see note in the
introduction). Confusion matrices of finger movement multiclass classification. The rows are the
actual movements, and the columns are the predicted movements.
4.1 Learning Commonality from the Brain Activity
In this section, we present how multitask learning improves the performance of the classifier. Although multitask learning has been employed in the context of brain signal decoding [2], we are the
first to apply it to decoding ECoG signals in humans. We group all the individual finger movements together, such
that each task has similarity with the others. First of all, we evaluate the performance of single-task
learning using SVM. Then, we study the SVM-based multitask learning. As we show in Equation 4,
we make a trade-off between modeling the joint component and modeling class-specific components
by adjusting the parameters λ_0 and λ. We search over a number of regularization constants (λ_0, λ), and pick
the parameters that lead to the highest average AUC for all tasks. Table 1 shows the comparison
of SVM-based single task learning and multitask learning. Here we evaluate the multitask learning
algorithm based on the improvement of (1-AUC); (1-AUC) stands for the area above the curve. The
average improvement of the decoder for three patients is 25.53%, 5.60%, and 18.57%, respectively.
This confirms our assumption that there exists brain activity that controls the finger movement, irrespective of any particular finger. By carefully searching the best parameters that regulates the
trade-off between learning commonality among all finger movement and specificity of exact finger
movement, the classification algorithm can be significantly improved. We also compare the `1/`2regularized logistic regression-based multitask learning with SVM-based multitask learning. There
is an improvement on (1-AUC) for logistic regression-based multitask learning. Again, it illustrates
that multitask learning is particularly helpful in learning similar tasks that are controlled by the brain.
However, we prefer SVM-based multitask learning because of the larger improvement.
             Subject 1           Subject 2           Subject 3
AUC          STL      MTL       STL      MTL       STL      MTL
Thumb        N/A      N/A       0.7710   0.7845    0.7680   0.8611
Index        0.8477   0.8494    0.9061   0.8948    0.7454   0.8242
Middle       0.8393   0.8569    0.9021   0.8990    0.9459   0.9481
Ring         0.8000   0.8561    0.8888   0.8894    0.7404   0.7479
Little       0.7425   0.7865    0.7124   0.7586    0.7705   0.7801
Table 1: Comparison of SVM-based single-task learning (STL) and SVM-based multi-task learning
(MTL). The parameters are chosen from the validation dataset: λ_0 = 10^{-2} and λ = 10^4 for Subject 1,
λ_0 = 1 and λ = 10^2 for Subject 2, and λ_0 = 10^2 and λ = 10^{-2} for Subject 3. The best decoding
performance is indicated in bold.
5 Weight Analysis
An important part of decoding finger movements from cortical activity is to map the features back
to cortical domain. Physiologically, it is important to understand the features which contribute most
to the decoding algorithms i.e. the features with the highest weights. As shown in Table 2 below, the
decoding accuracy, indicated by AUC, does not change much as we increase the number of features
used for classification. This signifies that from the large feature set used for decoding, a few features
form the core and are the most important. To visualize these core features, we mapped the top 30
features back to the brain. Figure 4 above shows the normalized weights from the features used to
classify finger movements from non-movements. It is apparent from the figure that the features with
the highest weights fall in the DLPFC and premotor areas. This is what we would expect since these
two areas are the ones most involved in the planning of motor movements. As previously reported,
the frequency range with the highest weights falls in the lower frequencies in ipsilateral movements
[38]. In our case, the frequencies fall in the delta-alpha range. As noted by Tallon-Baudry, attention
networks of the brain affect oscillatory synchrony at frequencies as low as the theta-alpha range [31].
# features   1       2       4       8       16      32      64      256     4096
AUC          0.681   0.717   0.755   0.787   0.803   0.807   0.807   0.807   0.808
Table 2: The area under the curve (AUC) as a function of the number of features used for classification. Features were selected in decreasing order of their respective absolute weights from logistic
regression with ℓ1 regularization.
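The analysis behind Table 2 can be reproduced schematically as follows; the data below is a placeholder, and AUC is computed in-sample here for brevity, whereas a real analysis would use a held-out split.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X = np.random.randn(5000, 4096)                   # placeholder N*F feature matrix
y = np.sign(np.random.randn(5000))                # moving vs. not moving labels

clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
order = np.argsort(-np.abs(clf.coef_[0]))         # features by |weight|, descending
for k in (1, 2, 4, 8, 16, 32):
    keep = order[:k]                              # top-k "core" features
    sub = LogisticRegression().fit(X[:, keep], y)
    print(k, roc_auc_score(y, sub.decision_function(X[:, keep])))
```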
(Figure 4 panels: brain maps for Subject 1, Subject 2, and Subject 3.)
Figure 4: Brain maps representing the weights of the top 30 features of the three subjects. They illustrate
the variability in cortical processing of ipsilateral finger movements. It can also be seen that cortical
processing occurs as a network involving dorsolateral prefrontal cortex, pre-motor, and motor areas.
The frequency range for these features is in the delta and alpha range, i.e., the low-frequency range.
6 Discussion
The notion that motor cortex plays a role in ipsilateral body movements was first asserted by Nyberg-Hansen et al., who found that 15% of corticospinal neurons did not decussate in cats [22]. Originally this was felt
to represent more axial motor control. Further studies in single-neuron recordings in monkey models
extended this observation to include ipsilateral hand and finger function. Tanji et al. demonstrated
that a small percentage of primary motor cortical neurons showed increased activity with ipsilateral
hand movements [32]. This site was found to be anatomically distinct from contralateral hand sites
and, when stimulated, produced ipsilateral hand movements [1]. Additionally, a larger subset of
premotor neurons was found to demonstrate more robust activations with cues to initiate movement
during both ipsilateral and contralateral movements than with primary motor sites [3, 6]. These
findings in animal models support the conclusion that a small percent of motor and a larger percent
of premotor cortex participate in control of ipsilateral limb and hand movements.
In humans, there appears to be a dichotomy in how motor regions contribute depending on whether
the primary or non-primary motor cortex is examined. Using fMRI Newton et al. demonstrated that
there was a negative change from baseline in the fMRI BOLD sequence in M1 associated with ipsilateral
movements and postulated this to represent increased inhibition [21]. Verstynen et al., however,
recently published contrasting results. Their group showed that anatomically distinct primary motor
sites demonstrated increased activation that became more pronounced during the execution of complex movements [36]. The role that premotor cortex plays appears to be distinct from that of primary
motor cortex. In normal subjects, fMRI shows that there is more robust bilateral activation of the
dorsal premotor cortex with either contralateral or ipsilateral hand movements [15]. The findings
by Huang et al. (2004) demonstrated that ipsilateral premotor areas have magnetoencephalography (MEG) dipole peak latencies that significantly precede those of contralateral M1 sensorimotor cortex
in performing unilateral finger movements. Using electroencephalography (EEG), ipsilateral hand
movements have been shown to induce alteration in cortical potentials prior to movement; this is referred to as premotor positivity [33, 29]. Spectral analyses of EEG signals have shown bihemispheric
low-frequency responses with various finger and hand movements. Utilizing electrocorticography
(ECoG), Wisneski et al more definitively demonstrated that the cortical physiology associated with
ipsilateral hand movements was associated with lower frequency spectral changes, an earlier timing,
and premotor predominant cortical localization, when compared to cortical physiology that was associated with contralateral hand movements [38]. Taken together, these findings support more of a
motor planning role, rather than execution role, in ipsilateral hand actions.
Decoding the information present in the ECoG signal with regard to ipsilateral finger movements is
important in defining the potential use of BCI methodologies for patients with hemispheric dysfunction due to stroke or trauma. If high resolution motor kinematics can be decoded from the ECoG
signal (e.g. individual finger flexion and extension), a BCI platform could potentially be created
to restore function to a stroke induced paretic hand. Since up to one-half of hemispheric stroke
patients are chronically left with permanent loss of function in their affected hand, this could have
substantial clinical impact [20]. Functional imaging has shown these severely affected patients to
have increased activity in the premotor regions of their unaffected hemispheres [28, 37]. The exact
role this activity plays is still unclear. It may simply be an indicator of a more severe outcome [35]
or an adaptive mechanism to optimize an already poor situation [13]. Thus, incomplete recovery
and its association with heightened ipsilateral activation may reflect the up-regulation of motor planning with an inability to execute or actuate the selected motor choice. In this situation, a BCI may
provide a unique opportunity to aid in actuating the nascent premotor commands. By decoding the
brain signals associated with a given motor intention, the BCI may then convert these signals into
commands that could control a robotic assist device that would allow for improved hand function
(i.e., a robotic glove that opens and closes the hand or a functional electrical simulator that operates
the nerves and muscles of the hand). The BCI would allow the ipsilateral premotor cortex to bypass
the physiological bottleneck determined by injured and dysfunctional contralateral primary cortex
(due to stroke) and the small and variable percentage of uncrossed motor fibers from ipsilateral M1.
This new methodology would allow for restoration of function in chronically and severely affected
subjects for whom methods of rehabilitation have not accomplished a sufficient recovery.
7 Conclusion
To our knowledge, this work describes the first instance of successful detection of individual finger
movements from human ipsilateral ECoG signals. In this paper, we present a general decoding
framework using the following algorithms: (1) ℓ1-regularized logistic regression for detecting finger
movement; (2) multiclass support vector machines to discriminate between fingers; and (3) the
first application of multitask learning to ECoG signals to improve decoding accuracy. The results
presented here suggest that there exists information on the cortex ipsilateral to the moving fingers
which can be decoded with high accuracy using machine learning algorithms. These results present
a great potential in the world of neuroprosthetics and BCI. For patients suffering from stroke and
hemiparesis, decoding finger movements from the unaffected hemisphere can be of tremendous help.
Our future goals involve simultaneous decoding of finger and arm movements (using a standard center-out joystick task) from both ipsilateral and contralateral hemispheres. Another important goal is the
real-time use of these decoding results, demonstrating their utility in the world of BCI.
References
[1] H. Aizawa, H. Mushiake, M. Inase, and J. Tanji. An output zone of the monkey primary motor cortex specialized for bilateral hand movement. Experimental Brain Research, 82(1):219-221, 1990.
[2] M. Alamgir, M. Grosse-Wentrup, and Y. Altun. Multitask learning for brain-computer interfaces. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 9:17-24, 2010.
[3] C. Brinkman and R. Porter. Supplementary motor area in the monkey: activity of neurons during performance of a learned motor task. Journal of Neurophysiology, 42(3):681, 1979.
[4] E. Buch, C. Weber, L. Cohen, C. Braun, M. Dimyan, T. Ard, J. Mellinger, A. Caria, S. Soekadar, A. Fourkas, et al. Think to move: a neuromagnetic brain-computer interface (BCI) system for chronic stroke. Stroke, 39(3):910, 2008.
[5] R. Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.
[6] P. Cisek, D. Crammond, and J. Kalaska. Neural activity in primary motor and dorsal premotor cortex in reaching tasks with the contralateral versus ipsilateral arm. Journal of Neurophysiology, 89(2):922, 2003.
[7] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[8] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265-292, 2002.
[9] T. Evgeniou and M. Pontil. Regularized multi-task learning. In KDD, pages 109-117, 2004.
[10] P. Fox, J. Perlmutter, and M. Raichle. A stereotactic method of anatomical localization for positron emission tomography. Journal of Computer Assisted Tomography, 9(1):141, 1985.
[11] W. Freeman, M. Holmes, B. Burke, and S. Vanhatalo. Spatial spectra of scalp EEG and EMG from awake humans. Clinical Neurophysiology, 114(6):1053-1068, 2003.
[12] A. Georgopoulos, J. Kalaska, R. Caminiti, and J. Massey. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. Journal of Neuroscience, 2(11):1527, 1982.
[13] C. Gerloff, K. Bushara, A. Sailer, E. Wassermann, R. Chen, T. Matsuoka, D. Waldvogel, G. Wittenberg, K. Ishii, L. Cohen, et al. Multimodal imaging of brain reorganization in motor areas of the contralesional hemisphere of well recovered patients after capsular stroke. Brain, 129(3):791, 2006.
[14] L. Hochberg, M. Serruya, G. Friehs, J. Mukand, M. Saleh, A. Caplan, A. Branner, D. Chen, R. Penn, and J. Donoghue. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442(7099):164-171, 2006.
[15] H. Johansen-Berg, M. Rushworth, M. Bogdanovic, U. Kischka, S. Wimalaratna, and P. Matthews. The role of ipsilateral premotor cortex in hand movement after stroke. Proceedings of the National Academy of Sciences, 99(22):14518, 2002.
[16] M. Just, V. Cherkassky, S. Aryal, and T. Mitchell. A neurosemantic theory of concrete noun representation based on the underlying brain codes. 2010.
[17] E. Leuthardt, Z. Freudenberg, D. Bundy, and J. Roland. Microscale recording from human motor cortex: implications for minimally invasive electrocorticographic brain-computer interfaces. Journal of Neurosurgery: Pediatrics, 27(1), 2009.
[18] K. Miller, S. Makeig, A. Hebb, R. Rao, M. den Nijs, and J. Ojemann. Cortical electrode localization from X-rays and simple mapping for electrocorticographic research: the "Location on Cortex" (LOC) package for MATLAB. Journal of Neuroscience Methods, 162(1-2):303-308, 2007.
[19] D. Moran and A. Schwartz. Motor cortical representation of speed and direction during reaching. Journal of Neurophysiology, 82(5):2676, 1999.
[20] H. Nakayama, H. Jørgensen, H. Raaschou, and T. Olsen. Recovery of upper extremity function in stroke patients: the Copenhagen stroke study. Archives of Physical Medicine and Rehabilitation, 75(4):394, 1994.
[21] J. Newton, A. Sunderland, and P. Gowland. fMRI signal decreases in ipsilateral primary motor cortex during unilateral hand movements are related to duration and side of movement. NeuroImage, 24(4):1080-1087, 2005.
[22] R. Nyberg-Hansen and A. Brodal. Sites of termination of corticospinal fibers in the cat. An experimental study with silver impregnation methods. The Journal of Comparative Neurology, 120(3):369-391, 2004.
[23] S. Petersen, P. Fox, M. Posner, M. Mintun, and M. Raichle. Positron emission tomographic studies of the cortical anatomy of single-word processing. Cognitive Psychology: Key Readings, page 109, 2004.
[24] G. Pfurtscheller and A. Aranibar. Event-related cortical desynchronization detected by power measurements of scalp EEG. Electroencephalography and Clinical Neurophysiology, 42(6):817-826, 1977.
[25] G. Pfurtscheller, C. Guger, G. Muller, G. Krausz, and C. Neuper. Brain oscillations control hand orthosis in a tetraplegic. Neuroscience Letters, 292(3):211-214, 2000.
[26] S. Ryali and V. Menon. Feature selection and classification of fMRI data using logistic regression with L1 norm regularization. NeuroImage, 47:S57, 2009.
[27] G. Schalk, D. McFarland, T. Hinterberger, N. Birbaumer, and J. Wolpaw. BCI2000: a general-purpose brain-computer interface system. IEEE Transactions on Biomedical Engineering, 51(6):1034-1043, 2004.
[28] R. Seitz, P. Hoflich, F. Binkofski, L. Tellmann, H. Herzog, and H. Freund. Role of the premotor cortex in recovery from middle cerebral artery infarction. Archives of Neurology, 55(8):1081, 1998.
[29] H. Shibasaki and M. Kato. Movement-associated cortical potentials with unilateral and bilateral simultaneous hand movement. Journal of Neurology, 208(3):191-199, 1975.
[30] R. Srinivasan, P. Nunez, and R. Silberstein. Spatial filtering and neocortical dynamics: estimates of EEG coherence. IEEE Transactions on Biomedical Engineering, 45(7):814-826, 1998.
[31] C. Tallon-Baudry. Oscillatory synchrony and human visual cognition. Journal of Physiology-Paris, 97(2-3):355-363, 2003.
[32] J. Tanji, K. Okano, and K. Sato. Neuronal activity in cortical motor areas related to ipsilateral, contralateral, and bilateral digit movements of the monkey. Journal of Neurophysiology, 60(1):325, 1988.
[33] I. Tarkka and M. Hallett. Cortical topography of premotor and motor potentials preceding self-paced, voluntary movement of dominant and non-dominant hands. Electroencephalography and Clinical Neurophysiology, 75(1-2):36-43, 1990.
[34] D. Taylor and A. Schwartz. Direct cortical control of 3D neuroprosthetic devices. US Patent App. 10/495,207, Aug. 17, 2004.
[35] A. Turton, S. Wroe, N. Trepte, C. Fraser, and R. Lemon. Contralateral and ipsilateral EMG responses to transcranial magnetic stimulation during recovery of arm and hand function after stroke. Electroencephalography and Clinical Neurophysiology/Electromyography and Motor Control, 101(4):316-328, 1996.
[36] T. Verstynen, J. Diedrichsen, N. Albert, P. Aparicio, and R. Ivry. Ipsilateral motor cortex activity during unimanual hand movements relates to task complexity. Journal of Neurophysiology, 93(3):1209, 2005.
[37] C. Weiller, F. Chollet, K. Friston, R. Wise, and R. Frackowiak. Functional reorganization of the brain in recovery from striatocapsular infarction in man. Annals of Neurology, 31(5):463-472, 2004.
[38] K. Wisneski, N. Anderson, G. Schalk, M. Smyth, D. Moran, and E. Leuthardt. Unique cortical physiology associated with ipsilateral hand movements and neuroprosthetic implications. Stroke, 39(12):3351, 2008.
[39] J. Wolpaw, N. Birbaumer, D. McFarland, G. Pfurtscheller, and T. Vaughan. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113(6):767-791, 2002.
[40] J. Wolpaw and D. McFarland. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proceedings of the National Academy of Sciences of the United States of America, 101(51):17849, 2004.
3,448 | 4,122 | Unsupervised Kernel Dimension Reduction
Meihong Wang
Dept. of Computer Science
U. of Southern California
Los Angeles, CA 90089
[email protected]
Fei Sha
Dept. of Computer Science
U. of Southern California
Los Angeles, CA 90089
[email protected]
Michael I. Jordan
Dept. of Statistics
U. of California
Berkeley, CA
[email protected]
Abstract
We apply the framework of kernel dimension reduction, originally designed for
supervised problems, to unsupervised dimensionality reduction. In this framework, kernel-based measures of independence are used to derive low-dimensional
representations that maximally capture information in covariates in order to predict responses. We extend this idea and develop similarly motivated measures
for unsupervised problems where covariates and responses are the same. Our
empirical studies show that the resulting compact representation yields meaningful and appealing visualization and clustering of data. Furthermore, when used
in conjunction with supervised learners for classification, our methods lead to
lower classification errors than state-of-the-art methods, especially when embedding data in spaces of very few dimensions.
1 Introduction
Dimensionality reduction is an important aspect of many statistical learning tasks. In unsupervised
dimensionality reduction, the primary interest is to preserve significant properties of the data in a
low-dimensional representation. Well-known examples of this theme include principal component
analysis, manifold learning algorithms and their many variants [1–4].
In supervised dimensionality reduction, side information is available to influence the choice of the
low-dimensional space. For instance, in regression problems, we are interested in jointly discovering
a low-dimensional representation Z of the covariates X and predicting well the response variable
Y given Z. A classical example is Fisher discriminant analysis for binary response variables, which
projects X to a one-dimensional line. For more complicated cases, however, one needs to specify
a suitable regression function, E [Y | X ], in order to identify Z. This is often a challenging task in
itself, especially for high-dimensional covariates. Furthermore, one can even argue that this task is
cyclically dependent on identifying Z, as one of the motivations for identifying Z is that we would
hope that the low-dimensional representation can guide us in selecting a good regression function.
To address this dilemma, there has been a growing interest in sufficient dimension reduction (SDR)
and related techniques [5–8]. SDR seeks a low-dimensional Z which captures all the dependency between X and Y. This is ensured by requiring conditional independence among the three variables; i.e., $X \perp\!\!\!\perp Y \mid Z$. Several classical approaches exist to identify such random vectors Z [6, 9].
Recently, kernel methods have been adapted to this purpose. In particular, kernel dimensional reduction (KDR) develops a kernel-based contrast function that measures the degree of conditional independence [7]. Compared to classical techniques, KDR has the significant advantage that it avoids
making strong assumptions about the distribution of X. Therefore, KDR has been found especially
suitable for high-dimensional problems in machine learning and computer vision [8, 10, 11].
In this paper we show how the KDR framework can be used in the setting of unsupervised learning.
Our idea is similar in spirit to a classical idea from the neural network literature: we construct
an "autoencoder" or "information bottleneck" where the response variables are the same as the
covariates [12, 13]. The key difference is that autoencoders in the neural network literature were
based on a specific parametric regression function. By exploiting the SDR and KDR frameworks,
on the other hand, we can cast the unsupervised learning problem within a general nonparametric
framework involving conditional independence, and in particular as one of optimizing kernel-based
measures of independence.
We refer to this approach as "unsupervised kernel dimensionality reduction" (UKDR). As we will
show in an empirical investigation, the UKDR approach works well in practice, comparing favorably
to other techniques for unsupervised dimension reduction. We assess this via visualization and via
building classifiers on the compact representations delivered by these methods. We also provide
some interesting analytical links of the UKDR approach to stochastic neighbor embedding (SNE)
and t-distributed SNE (t-SNE) [14, 15].
The paper is organized as follows. In Section 2, we review the SDR framework and discuss how
kernels can be used to solve the SDR problem. Additionally, we describe two specific kernel-based measures of independence, elucidating a relationship between these measures. We show
how the kernel-based approach can be used for unsupervised dimensionality reduction in Section 3.
We report empirical studies in Section 4. Finally, we conclude and comment on possible future
directions in Section 5.
Notation Random variables are denoted with upper-case characters such as X and Y. To refer to their specific values, if vectorial, we use bold lower-case such as $\mathbf{x}$ and $\mathbf{x}_n$; $x_i$ stands for the i-th element of $\mathbf{x}$. Matrices are in bold upper-case such as $\mathbf{M}$.
2 Sufficient dimension reduction and measures of independence with kernels
Discovering statistical (in)dependencies among random variables is a classical problem in statistics;
examples of standard measures include Spearman's ρ, Kendall's τ, and Pearson's χ² tests. Recently,
there has been a growing interest in computing measures of independence in Reproducing Kernel
Hilbert spaces (RKHSs) [7, 16]. Kernel-based (and other nonparametric) methods detect nonlinear
dependence in random variables without assuming specific relationships among them. In particular,
the resulting independence measures attain minimum values when random variables are independent. These methods were originally developed in the context of independent component analysis [17] and have found applications in a variety of other problems, including clustering, feature
selection, and dimensionality reduction [7, 8, 18–21].
We will be applying these approaches to unsupervised dimensionality reduction. Our proposed
techniques aim to yield a low-dimensional representation which is "maximally" dependent on the original high-dimensional inputs; this will be made precise in a later section. To this end, we first
describe briefly kernel-based measures of (conditional) independence, focusing on how they are
applied to supervised dimensionality reduction.
2.1 Kernel dimension reduction for supervised learning
In supervised dimensionality reduction for classification and regression, the response variable, $Y \in \mathcal{Y}$, provides side information about the covariates, $X \in \mathcal{X}$. In a basic version of this problem we seek a linear projection $B \in \mathbb{R}^{D \times M}$ to project X from the D-dimensional space to an M-dimensional subspace. We would like the low-dimensional coordinates $Z = B^\top X$ to be as predictive about Y as X is; i.e., $E[Y \mid B^\top X] = E[Y \mid X]$. Intuitively, knowing Z is sufficient for the purpose of regressing Y.

This problem is referred to as sufficient dimension reduction (SDR) in statistics, where it has been the subject of a large literature [22]. In particular, SDR seeks a projection B such that

$$X \perp\!\!\!\perp Y \mid B^\top X, \quad \text{subject to } B^\top B = I, \qquad (1)$$

where I is the $M \times M$ identity matrix. Several methods have been proposed to estimate B [6, 9]. Of special interest is the technique of kernel dimension reduction (KDR) that is based on assessing conditional independence in RKHS spaces [7]. Concretely, we map the two variables X and Y to the RKHS spaces $\mathcal{F}$ and $\mathcal{G}$ induced by two positive semidefinite kernels $K_X : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ and $K_Y : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$. For any function $g \in \mathcal{G}$, there exists a conditional covariance operator $C_{YY|X} : \mathcal{G} \to \mathcal{G}$ such that

$$\langle g, C_{YY|X}\, g \rangle_{\mathcal{G}} = E\big[\mathrm{var}_{Y|X}[g(Y) \mid X]\big] \qquad (2)$$

calculates the residual errors of predicting g(Y) with X [7, Proposition 3]. Similarly, we can define the conditional covariance operator $C^{B}_{YY|X}$ for predicting with $B^\top X$.

The conditional covariance operator has an important property: for any projection B, $C^{B}_{YY|X} \succeq C_{YY|X}$, where the (partial) order is defined in terms of the trace operator. Moreover, the equality holds if and only if eq. (1) is satisfied. This gives rise to the possibility of using the trace of the operators as a contrast function to estimate B.

Concretely, with N samples drawn from P(X, Y), we compute the corresponding kernel matrices $K_{B^\top X}$ and $K_Y$. We centralize them with a projection matrix $H = I - \frac{1}{N}\mathbf{1}\mathbf{1}^\top$, where $\mathbf{1} \in \mathbb{R}^N$ is the vector whose elements are all ones. The trace of the estimated conditional covariance operator $C^{B}_{YY|X}$ is then defined as follows:

$$\hat{J}_{YY|X}(B^\top X, Y) = \mathrm{Trace}\big[G_Y (G_{B^\top X} + N\varepsilon_N I_N)^{-1}\big], \qquad (3)$$

where $G_Y = HK_YH$ and $G_{B^\top X} = HK_{B^\top X}H$. Here $\varepsilon_N$ is a regularizer smoothing the kernel matrix. It should be chosen such that when $N \to +\infty$, $\varepsilon_N \to 0$ and $\sqrt{N}\varepsilon_N \to +\infty$ to ensure consistency [7]. The minimizer of the conditional independence measure yields the optimal projection B for kernel dimensionality reduction:

$$B_{YY|X} = \arg\min_{B^\top B = I} \hat{J}_{YY|X}(B^\top X, Y). \qquad (4)$$

We defer discussion on choosing kernels as well as numerical optimization to later sections. When it is clear from context, we use $\hat{J}_{YY|X}$ as a shorthand for $\hat{J}_{YY|X}(B^\top X, Y)$.
The optimization functional in eq. (3) is not the only way to implement the KDR idea. Indeed, another kernel-based measure of independence that can be optimized in the KDR context is the Hilbert-Schmidt Independence Criterion (HSIC) [16]. This is built as the Hilbert-Schmidt norm of the cross-covariance operator $C_{XY} : \mathcal{G} \to \mathcal{F}$, defined by

$$\mathrm{cov}(f, g) = \langle f, C_{XY}\, g \rangle_{\mathcal{F}} = E_{XY}\{[f(X) - E_X f(X)]\,[g(Y) - E_Y g(Y)]\}, \qquad (5)$$

where the expectations are taken with respect to the joint distribution and the two marginals respectively. It has been shown that for universal kernels such as Gaussian kernels the Hilbert-Schmidt norm of $C_{XY}$ is zero if and only if X and Y are independent [16]. Given N samples from P(X, Y), the empirical estimate of HSIC is given by (up to a multiplicative constant):

$$\hat{J}_{XY}(X, Y) = \mathrm{Trace}[HK_XHK_Y], \qquad (6)$$

where $K_X$ and $K_Y$ are $N \times N$ kernel matrices computed over X and Y respectively. To apply this independence measure to dimensionality reduction, we seek a projection B which maximizes $\hat{J}_{XY}(B^\top X, Y)$, such that the low-dimensional coordinates $Z = B^\top X$ are maximally dependent on Y:

$$B_{XY} = \arg\max_{B^\top B = I} \hat{J}_{XY}(B^\top X, Y) = \arg\max_{B^\top B = I} \mathrm{Trace}[HK_{B^\top X}HK_Y]. \qquad (7)$$
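For comparison, the HSIC estimate of eq. (6) reduces to a single trace; the sketch below is again an assumed implementation of ours, taking precomputed kernel matrices.

import numpy as np

def hsic(K_X, K_Y):
    # Empirical HSIC of eq. (6), up to a multiplicative constant
    N = K_X.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N
    return float(np.trace(H @ K_X @ H @ K_Y))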
It is interesting to note that the independence measures in eq. (3) and eq. (6) are similar. In fact, we have been able to find conditions under which they are equivalent, as stated in the following proposition.

Proposition 1. Let $N \to +\infty$ and $\varepsilon_N \to 0$. Additionally, assume that the samples are distributed uniformly on the unit sphere. If $\varepsilon_N \propto \sigma_N^2$, then up to a constant,

$$\hat{J}_{YY|X}(B^\top X, Y) \approx -c_0 N^2 \varepsilon_N^2\, \hat{J}_{XY}(B^\top X, Y). \qquad (8)$$

Therefore, under these conditions it is equivalent to minimize $\hat{J}_{YY|X}(B^\top X, Y)$ or to maximize $\hat{J}_{XY}(B^\top X, Y)$. Thus, $B_{XY} \approx B_{YY|X}$.
Proof. The proof is sketched in the supplementary material. Note that assuming the norm of X is
equal to one is not overly restrictive; in practice, one often needs to normalize data points to control
the overall scale.
We note that while the two measures are asymptotically equivalent, they have different computational complexity: computing $\hat{J}_{XY}$ does not involve matrix inversion. Furthermore, $\hat{J}_{XY}$ is slightly easier to use in practice as it does not depend on regularization parameters to smooth the kernel matrices.
The HSIC measure $\hat{J}_{XY}$ is also closely related to the technique of kernel alignment, which minimizes the angle between (vectorized) kernel matrices $K_X$ and $K_Y$ [23]. This is equivalent to maximizing $\mathrm{Trace}[K_XK_Y]/(\|K_X\|_F\|K_Y\|_F)$. The alignment technique has been used for clustering data X by assigning cluster labels Y so that the two kernel matrices are maximally aligned. The HSIC measure has also been used for similar tasks [18]. While both $\hat{J}_{YY|X}$ and $\hat{J}_{XY}$ have been used for supervised dimensionality reduction with known values of Y, they have not yet been applied to unsupervised dimensionality reduction, which is the direction that we pursue here.
3 Unsupervised kernel dimension reduction
In unsupervised dimensionality reduction, the low-dimensional representation Z can be viewed as
a compression of X. The goal is to identify the Z that captures as much of the information in X
as possible. This desideratum has been pursued in the neural network literature where autoencoders
learn a pair of encoding and decoding functions, Z = f (X) and X = g(Z). A drawback of this
approach is that f and g need to be specified a priori, in terms of number of layers and neurons in
neural nets.
Can we leverage the advantages of SDR and KDR to identify Z without specifying f (X) or g(Z)?
In this section, we describe how this can be done, viewing unsupervised dimensionality reduction as
a special type of supervised regression problem. We start by considering the simplest case where Z
is a linear projection of X. We then consider nonlinear approaches.
3.1 Linear unsupervised kernel dimension reduction
Given a random variable $X \in \mathbb{R}^D$, we consider the regression problem $\tilde{X} = f(B^\top X)$, where $\tilde{X}$ is a copy of X and $Z = B^\top X \in \mathbb{R}^M$ is the low-dimensional representation of X. Following the framework of SDR and KDR in section 2, we seek B such that $X \perp\!\!\!\perp \tilde{X} \mid B^\top X$. Such a $B^\top X$ thus captures all information in X in order to construct itself (i.e., $\tilde{X}$).

With a set of N samples from P(X), the linear projection B can be identified as the minimizer of the following kernel-based measure of independence:

$$\min_{B^\top B = I}\; \hat{J}_{XX|B^\top X} = \mathrm{Trace}\big[G_X(G_{B^\top X} + N\varepsilon_N I)^{-1}\big], \qquad (9)$$

where $G_X$ and $G_{B^\top X}$ are centralized kernel matrices of $K_X$ and $K_{B^\top X}$ respectively. We can alternatively maximize the corresponding HSIC measure of dependence between $B^\top X$ and X:

$$\max_{B^\top B = I}\; \hat{J}_{B^\top X\, X} = \mathrm{Trace}[G_XG_{B^\top X}]. \qquad (10)$$

We refer collectively to this kernel-based dimension reduction method as linear unsupervised KDR (UKDR), and we use $\hat{J}(B^\top X, X)$ as a shorthand for the independence measure to be either minimized or maximized.
3.2 Nonlinear unsupervised kernel dimension reduction
For data with complicated multimodal distributions, linear transformation of the inputs X is unlikely
to be sufficiently flexible to reveal useful structures. For example, linear projections can result in
overlapping clusters in low-dimensional spaces. For the purpose of better data visualization and
exploratory data analysis, we describe several simple yet effective nonlinear extensions to linear
UKDR. The main idea is to find a linear subspace embedding of nonlinearly transformed X. Let
$h(X) \in \mathbb{R}^H$ denote the nonlinear transformation. The projection B is then computed to optimize $\hat{J}(B^\top h(X), X)$.
Radial Basis Network (RBN). In the spirit of neural network autoencoder, one obvious choice of
h(X) is to use a network of radial basis functions (RBFs). In this case, H = N , the number of
samples from X. For a sample $x_i$, the n-th component of $h(x_i)$ is given by

$$h^{\mathrm{RBN}}_n(x_i) = \exp\{-\|x_i - x_n\|^2/\sigma_n^2\}, \qquad (11)$$

where $x_n$ is the center of the n-th RBF and $\sigma_n$ is the corresponding bandwidth.
Random Sparse Feature (RSF). In this approach we draw the $D \times H$ elements of W from a multivariate Gaussian distribution with zero mean and identity covariance matrix. We construct the k-th element of h(X) as

$$h^{\mathrm{RSF}}_k(X) = \mathrm{Heaviside}(w_k^\top X - b), \qquad (12)$$

where $w_k$ is the k-th row of W and b is an adjustable offset term. $\mathrm{Heaviside}(t)$ is the step function that takes the value of 1 when $t > 0$ and the value of 0 otherwise. Note that b controls the sparsity of $h^{\mathrm{RSF}}(X)$, a property that can be computationally advantageous.
Our choice of random matrix W is motivated by earlier work on neural networks with an infinite number of hidden units, and recent work on large-scale kernel machines and deep learning kernels [24–26]. In particular, in the limit of $H \to +\infty$, the transformed X induces an RKHS with the arccos kernel: $h^{\mathrm{RSF}}(u)^\top h^{\mathrm{RSF}}(v) = 1 - \frac{1}{\pi}\cos^{-1}\big(u^\top v/\|u\|\|v\|\big)$ [26].
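A hedged sketch of the two feature maps follows; the signatures and the seeded generator are our own choices, and the per-center bandwidths would in practice be set by a heuristic such as the perplexity trick mentioned later.

import numpy as np

def rbn_features(X, centers, sigmas):
    # h_n^RBN(x) = exp(-||x - x_n||^2 / sigma_n^2), as in eq. (11)
    d2 = np.sum((X[:, None, :] - centers[None, :, :])**2, axis=2)
    return np.exp(-d2 / sigmas[None, :]**2)

def rsf_features(X, H, b, seed=0):
    # h_k^RSF(x) = Heaviside(w_k^T x - b), as in eq. (12); entries of W ~ N(0, 1)
    W = np.random.default_rng(seed).standard_normal((X.shape[1], H))
    return (X @ W - b > 0).astype(float)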
Nonparametric. We have also experimented with a setup where Z is not constrained to any parametric form. In particular, we optimize $\hat{J}(Z, X)$ over all possible values $Z \in \mathbb{R}^M$. While more powerful in principle than either linear KDR or the RBF or RSF variants of nonlinear KDR, we have found empirically that the optimization can get stuck in local optima. However, when initialized with the solutions from the other nonlinear methods, the final solution is generally better.
3.3 Choice of kernels
The independence measures $\hat{J}(B^\top X, X)$ are defined via kernels over $B^\top X$ and X. A natural choice is a universal kernel, in particular the Gaussian kernel: $K_{B^\top X}(x_i, x_j) = \exp\{-\|B^\top x_i - B^\top x_j\|^2/\sigma_B^2\}$, and similarly for X with a different bandwidth $\sigma_X$. We have also experimented with other types of kernels; in particular we have found the following kernels to be of particular interest.
Random walk kernel over X. Given N observations $\{x_1, x_2, \ldots, x_N\}$, we note that the RBN-transformed $x_i$ in eq. (11), when properly normalized, can be seen as the probability of a random walk from $x_i$ to $x_j$:

$$p_{ij} = P(x_i \to x_j) = \exp\{-\|x_i - x_j\|^2/\sigma_i^2\} \Big/ \sum_{j' \neq i} \exp\{-\|x_i - x_{j'}\|^2/\sigma_i^2\}. \qquad (13)$$

The matrix P with elements $p_{ij}$ is clearly not symmetric and not positive semidefinite. Nevertheless, a simple transformation $K_X = PP^\top$ turns it into a positive semidefinite kernel. Intuitively, the values of $p_{ij}$ describe local structures around $x_i$ [14]. Thus $K_X(x_i, x_j) = \sum_k p_{ik}p_{jk}$ measures the similarity between $x_i$ and $x_j$ in terms of these local structures.
Cauchy kernel for $B^\top X$. A Cauchy kernel is a positive semidefinite kernel and is given by

$$C(u, v) = 1/\big(1 + \|u - v\|^2\big) = \exp\big\{-\log(1 + \|u - v\|^2)\big\}. \qquad (14)$$

We define $K_{B^\top X}(x_i, x_j) = C(B^\top x_i, B^\top x_j)$. Intuitively, the Cauchy kernel can be viewed as a Gaussian kernel in a transformed space $\phi(B^\top X)$ such that $\phi(x_i)^\top\phi(x_j) = C(x_i, x_j)$ [27].

These two types of kernels are closely related to t-distributed stochastic neighbor embedding (t-SNE), a state-of-the-art technique for dimensionality reduction [15]. We discuss the link in the Supplementary Material.
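The two kernels above can be assembled in a few lines; this sketch (our naming, with per-point bandwidths passed in explicitly) follows eqs. (13) and (14) directly.

import numpy as np

def random_walk_kernel(X, sigmas):
    # K_X = P P^T, where p_ij are the normalized transition probabilities of eq. (13)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    A = np.exp(-d2 / sigmas[:, None]**2)
    np.fill_diagonal(A, 0.0)          # the normalization in eq. (13) excludes j = i
    P = A / A.sum(axis=1, keepdims=True)
    return P @ P.T

def cauchy_kernel(Z):
    # C(u, v) = 1 / (1 + ||u - v||^2), as in eq. (14)
    sq = np.sum(Z**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return 1.0 / (1.0 + d2)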
3.4 Numerical optimization
We apply gradient-based techniques (with line search) to optimize either independence measure. The techniques constrain the projection matrix B to lie on the Grassman-Stiefel manifold $B^\top B = I$ [28]. While the optimization is nonconvex, our optimization algorithm works quite well in practice.

Figure 1: Experiments with synthetic 2D data. (a) Original. (b) 1D embedding by t-SNE. (c) and (d) are 1D embeddings by UKDR; they differ in terms of how the embeddings are constrained (see text for details). Vertical axes are the coordinates of the 1D embeddings. t-SNE failed to separate the data. UKDR makes fewer mistakes in (c) and no mistakes in (d).

The complexity of computing gradients is quadratic in the number of data points, as the kernel matrix needs to be computed. Standard tricks, such as chunking, for handling large kernel matrices apply, though our empirical work has not used them. In order to optimize on the Stiefel manifold, computing the search direction from the gradient needs a QR decomposition which depends cubically on D, the original dimensionality. A more efficient implementation can bring the complexity down to quadratic in D and linear in M, the dimensionality of the low-dimensional space. One simple strategy is to use PCA as a preprocessing step to obtain a moderate D.
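One common way to realize such a step, sketched below under our own naming, is to take a Euclidean gradient step and retract back onto the manifold with a QR decomposition; the sign fix keeps the factorization unique.

import numpy as np

def stiefel_step(B, grad, lr):
    # One descent step constrained to {B : B^T B = I}
    Q, R = np.linalg.qr(B - lr * grad)
    return Q * np.sign(np.diag(R))    # align column signs of the Q factor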
4 Experiments
We compare the performance of our proposed methods for unsupervised kernel dimension reduction
(UKDR) to a state-of-the-art method, specifically t-distributed stochastic neighbor embedding (t-SNE) [15]. t-SNE has been shown to excel in many tasks of data visualization and clustering. In addition to visual examination of 2D embedding quality, we also investigate the performance of the resulting low-dimensional representations in classification. In all of the experiments reported in this section, we have used the independence measure $\hat{J}_{B^\top X\,X}(B^\top X, X)$ of eq. (10).
4.1 Synthetic example
Our synthetic example contains 300 data points randomly distributed on two rings, shown in
Fig. 1(a). We use t-SNE and our proposed method to yield 1D embeddings of these data points,
plotted in Fig. 1(b)–1(d). The horizontal axis indexes the data points, where the first 100 indices
correspond to the inner ring.
Fig. 1(b) plots a typical embedding by t-SNE where we see that there is significant overlap between the clusters. On the other hand, UKDR is able to generate less overlapped or non-overlapped
clusters. In Fig. 1(c), the embedding is computed as the linear projection of the RBN-transformed
original data. In Fig. 1(d), the embedding is unconstrained and free to take any value on the 1D axis, corresponding to the "nonparametric embedding" presented in section 3.
4.2 Images of handwritten digits
Our second data set is a set of 2007 images of USPS handwritten digits [20]. Each image has 256 pixels and is thus represented as a point in $\mathbb{R}^{256}$. We refer to this data set as "USPS-2007." We also sampled a subset of 500 images, 100 each from the digits 1, 2, 3, 4 and 5. Note that images of digit 3 and 5 are often indistinguishable from each other. We refer to this dataset as "USPS-500."
USPS-500. Fig. 2 displays a 2D embedding of the 500 images. The colors encode digit categories
(which are used only for visualization). The first row was generated with kernel PCA, Laplacian
eigenmaps and t-SNE. t-SNE clearly outperforms the other two in yielding well-separated clusters.
The second row was generated with our UKDR method with Gaussian kernels for both the low-dimensional coordinates Z and X. The difference between the three embeddings is whether Z is
constrained as a linear projection of the original X (linear UKDR), an RBN-transformed X (RBN
UKDR), or a Random Sparse Feature transform of X (RSF UKDR). The Gaussian kernel bandwidths over Z were 0.1, 0.02 and 0.5, respectively. For the RBN transformation of X, we selected
the bandwidth of each RBF function in eq. (11) with the "perplexity trick" used in SNE and t-SNE [15]. The bandwidth for the Gaussian kernel over X was 0.5 for all three plots. While linear
UKDR yields reasonably good clusters of the data, RBN UKDR and RSF UKDR yield significantly
improved clusterings. Indeed, the quality of the embeddings is on par with that of t-SNE.
In the third row of the figure, the embedding Z is constrained to be RSF UKDR. However, instead
of using Gaussian kernels (as in the second row), we have used Cauchy kernels. The kernels over X
are Gaussian, Random Walk, and Diffusion Map kernels [29], respectively. In general, contrasting
to embeddings in the second row, using a Cauchy kernel for the embedding space Z leads to tighter
clusters. Additionally, the embedding by the diffusion map kernel is the most visually appealing one, outperforming t-SNE by significantly increasing the gap of digits 1 and 4 from the others.
Figure 2: 2D embedding results for the USPS-500 dataset: (a) Kernel PCA, (b) Laplacian eigenmap, (c) t-SNE in the first row; (d) Linear UKDR, (e) RBN UKDR, (f) RSF UKDR in the second row; (g) Gaussian+Cauchy, (h) Random Walk+Cauchy, (i) Diffusion+Cauchy in the third row. Embeddings by existing approaches are shown in the first row; embeddings by UKDR are shown in the bottom two rows.
Effect of sparsity. For RSF features computed with eq. (12), the offset constant b can be used to
obtain control over the sparsity of the feature vectors. We investigated the effect of the sparsity
level on embeddings. We found that a sparsity level as high as 82% still generates reasonable
embeddings. Details are reported in the Supplementary Material. Thus RSF features are viable
options for handling high-dimensional data for nonlinear UKDR.
USPS-2007: visualization and classification. In Fig. 3, we compare the embeddings of t-SNE and
unsupervised KDR on the full USPS 2007 data set. The data set has many easily confusable pairs
of images. Both t-SNE and unsupervised KDR lead to visually appealing clustering of data. In the
UKDR framework, using an RBN transformation to parameterize the embedding performs slightly
better than using the RSF transformation.
M      | 2    | 3    | 5    | 10    | 20  | 50
UKDR   | 11.1 | 11.6 | 9.6  | 9.5   | 8.8 | 7.8
t-SNE  | 19.8 | 16.8 | 19.3 | 8.4   | 8.2 | 8.1
PCA    | 49.3 | 42.2 | 21.5 | 10.03 | 6.7 | 6.6

Table 1: Classification errors on the USPS-2007 data set with different dimensionality reduction techniques.
Finally, as another way to assess the quality of the low-dimensional embeddings discovered by these
methods, we used these embeddings as inputs to supervised classifiers. The classifier we used was
the large-margin nearest-neighbor classifier of [30]. We split the 2007 images into 70% for training
and 30% for testing and reporting classification errors. We repeated the random split 50 times and
report averaged errors. The results are displayed in table 1 where PCA acts as a baseline. There are
several notable findings. First, with very few dimensions (up to and including 5), our UKDR method
outperforms both t-SNE and PCA significantly. As the dimensionality goes up, t-SNE starts to
perform better than our method but only marginally. PCA is expected to perform well with very high
dimensionality as it recovers pairwise distances the best. The superior classification performance by
our method is highly desirable when the target dimensionality is very much constrained.
Figure 3: Embeddings of the USPS-2007 data set by our nonlinear UKDR approach and by t-SNE: (a) RBN UKDR, (b) RSF UKDR, (c) t-SNE. Both methods separate all classes reasonably well. However, using these embeddings as inputs to classifiers suggests that the embedding by nonlinear UKDR is of higher quality.
5 Conclusions
We propose a novel technique for unsupervised dimensionality reduction. Our approach is based on
kernel dimension reduction. The algorithm identifies low-dimensional representations of input data
by optimizing independence measures computed in a reproducing kernel Hilbert space. We study
empirically and contrast the performance of our method to that of state-of-the-art approaches. We
show that our method yields meaningful and appealing clustering patterns of data. When used for
classification, it also leads to significantly lower misclassification.
Acknowledgements
This work is partially supported by NSF Grant IIS-0957742 and DARPA N10AP20019. F.S. also
benefited from discussions with J.P. Zhang, under Fudan University Key Laboratory Senior Visiting
Scholar Program.
References
[1] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323, 2000.
[2] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319, 2000.
[3] C. M. Bishop, M. Svensén, and C. K. I. Williams. GTM: the generative topographic mapping. Neural Computation, 10:215–234, 1998.
[4] N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In Advances in Neural Information Processing Systems 16, pages 329–336. MIT Press, 2004.
[5] R. D. Cook and X. Yin. Dimension reduction and visualization in discriminant analysis (with discussion). Australian & New Zealand Journal of Statistics, 43:147–199, 2001.
[6] K. C. Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86:316–327, 1991.
[7] K. Fukumizu, F. R. Bach, and M. I. Jordan. Kernel dimension reduction in regression. The Annals of Statistics, 37:1871–1905, 2009.
[8] J. Nilsson, F. Sha, and M. I. Jordan. Regression on manifolds using kernel dimension reduction. In Proceedings of the 24th International Conference on Machine Learning, pages 697–704. ACM, 2007.
[9] K.-C. Li. On principal Hessian directions for data visualization and dimension reduction: another application of Stein's lemma. Journal of the American Statistical Association, 86:316–342, 1992.
[10] A. Shyr, R. Urtasun, and M. I. Jordan. Sufficient dimensionality reduction for visual sequence classification. In Proceedings of the Twenty-third IEEE Conference on Computer Vision and Pattern Recognition, pages 3610–3617, 2010.
[11] Q. Wu, S. Mukherjee, and F. Liang. Localized sliced inverse regression. In Advances in Neural Information Processing Systems 21, pages 1785–1792. MIT Press, 2009.
[12] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[13] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control, and Computing, pages 368–377, 1999.
[14] G. Hinton and S. Roweis. Stochastic neighbor embedding. In Advances in Neural Information Processing Systems 15, pages 857–864, 2003.
[15] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. The Journal of Machine Learning Research, 9:2579–2605, 2008.
[16] A. Gretton, R. Herbrich, A. Smola, O. Bousquet, and B. Schölkopf. Kernel methods for measuring independence. The Journal of Machine Learning Research, 6:2075–2129, 2005.
[17] F. R. Bach and M. I. Jordan. Kernel independent component analysis. The Journal of Machine Learning Research, 3:1–48, 2003.
[18] L. Song, A. Smola, A. Gretton, and K. M. Borgwardt. A dependence maximization view of clustering. In Proceedings of the 24th International Conference on Machine Learning, pages 815–822. ACM, 2007.
[19] L. Song, A. Smola, A. Gretton, K. M. Borgwardt, and J. Bedo. Supervised feature selection via dependence estimation. In Proceedings of the 24th International Conference on Machine Learning, pages 823–830. ACM, 2007.
[20] L. Song, A. Smola, K. Borgwardt, and A. Gretton. Colored maximum variance unfolding. In Advances in Neural Information Processing Systems 20, pages 1385–1392, 2008.
[21] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. The Journal of Machine Learning Research, 5:73–99, 2004.
[22] K. P. Adragni and R. D. Cook. Sufficient dimension reduction and prediction in regression. Philosophical Transactions A, 367:4385–4405, 2009.
[23] N. Cristianini, J. Kandola, A. Elisseeff, and J. Shawe-Taylor. On kernel-target alignment. In Advances in Neural Information Processing Systems 14, pages 367–373. MIT Press, 2002.
[24] C. K. I. Williams. Computation with infinite neural networks. Neural Computation, 10:1203–1216, 1998.
[25] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems 20, pages 1177–1184. MIT Press, 2008.
[26] Y. Cho and L. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems 22, pages 342–350. MIT Press, 2009.
[27] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups. Springer-Verlag, 1984.
[28] A. Edelman, T. A. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl., 20:303–353, 1998.
[29] B. Nadler, S. Lafon, R. Coifman, and I. G. Kevrekidis. Diffusion maps, spectral clustering and eigenfunctions of Fokker–Planck operators. In Advances in Neural Information Processing Systems 18, pages 955–962. MIT Press, 2005.
[30] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10:207–244, 2009.
3,449 | 4,123 | Repeated Games against Budgeted Adversaries
Manfred K. Warmuth†
Department of Computer Science
UC Santa Cruz
[email protected]
Jacob Abernethy∗
Division of Computer Science
UC Berkeley
[email protected]
Abstract
We study repeated zero-sum games against an adversary on a budget. Given that
an adversary has some constraint on the sequence of actions that he plays, we
consider what ought to be the player's best mixed strategy with knowledge of this budget. We show that, for a general class of normal-form games, the minimax strategy is indeed efficiently computable and relies on a "random playout" technique. We give three diverse applications of this new algorithmic template: a cost-sensitive "Hedge" setting, a particular problem in Metrical Task Systems,
and the design of combinatorial prediction markets.
1 Introduction
How can we reasonably expect to learn given possibly adversarial data? Overcoming this obstacle
has been one of the major successes of the Online Learning framework or, more generally, the
so-called competitive analysis of algorithms: rather than measure an algorithm only by the cost it
incurs, consider this cost relative to an optimal "comparator algorithm" which has knowledge of the data in advance. A classic example is the so-called "experts setting": assume we must predict a
sequence of binary outcomes and we are given access to a set of experts, each of which reveals their
own prediction for each outcome. After each round we learn the true outcome and, hence, which
experts predicted correctly or incorrectly. The expert setting is based around a simple assumption,
that while some experts' predictions may be adversarial, we have an a priori belief that there is at
least one good expert whose predictions will be reasonably accurate. Under this relatively weak
good-expert assumption, one can construct algorithms that have quite strong loss guarantees.
Another way to interpret this sequential prediction model is to treat it as a repeated two-player
zero-sum game against an adversary on a budget; that is, the adversary's sequence of actions is
restricted in that play ceases once the adversary exceeds the budget. In the experts setting, the
assumption "there is a good expert" can be reinterpreted as "nature shall not let the best expert err too frequently", perhaps more than some fixed number of times.
In the present paper, we develop a general framework for repeated game-playing against an adversary on a budget, and we provide a simple randomized strategy for the learner/player for a particular
class of these games. The proposed algorithms are based on a technique, which we refer to as a
"random playout", that has become a very popular heuristic for solving games with massively-large
state spaces. Roughly speaking, a random playout in an extensive-form game is a way to measure
the likely outcome at a given state by finishing the game randomly from this state. Random playouts, often known simply as Monte Carlo methods, have become particularly popular for solving
the game of Go [5], which has led to much follow-up work for general games [12, 11]. The Budgeted Adversary game we consider also involves exponentially large state spaces, yet we achieve
efficiency using these random playouts. The key result of this paper is that the proposed random
playout is not simply a good heuristic; it is indeed minimax optimal for the games we consider.
∗ Supported by a Yahoo! PhD Fellowship and NSF grant 0830410.
† Supported by NSF grant IIS-0917397.
Abernethy et al. [1] was the first to use a random playout strategy to optimally solve an adversarial
learning problem, namely for the case of the so-called Hedge Setting introduced by Freund and
Schapire [10]. Indeed, their model can be interpreted as a particular special case of a Budgeted
Adversary problem. The generalized framework that we give in the first half of the paper, however,
has a much larger range of applications. We give three such examples, described briefly below.
More details are given in the second half of the paper.
Cost-sensitive Hedge Setting. In the standard Hedge setting, it is assumed that each expert suffers
a cost in [0, 1] on each round. But a surprisingly-overlooked case is when the cost ranges differ,
where expert i may suffer per-round cost in $[0, c_i]$ for some fixed $c_i > 0$. The vanilla approach, to use a generic bound of $\max_i c_i$, is extremely loose, and we know of no better bounds for this case.
Our results provide the optimal strategy for this cost-sensitive Hedge setting.
Metrical Task Systems (MTS). The MTS problem is a decision/learning problem similar to the
Hedge Setting above but with an added difficulty: the learner is required to pay the cost of moving
through a given metric space. Finding even a near-optimal generic algorithm has remained elusive
for some time, with recent encouraging progress made in one special case [2], for the so-called
"weighted-star" metric. Our results provide a simple minimax optimal algorithm for this problem.
Combinatorial Prediction Market Design. There has been a great deal of work in designing so-called prediction markets, where bettors may purchase contracts that pay off when the outcome of a
future event is correctly guessed. One important goal of such markets is to minimize the potential
risk of the ?market maker? who sells the contracts and pays the winning bettors. Another goal is
to design ?combinatorial? markets, that is where the outcome space might be complex. The latter
has proven quite challenging, and there are few positive results within this area. We show how
to translate the market-design problem into a Budgeted Adversary problem, and from here how to
incorporate certain kinds of combinatorial outcomes.
2 Preliminaries
Notation: We shall write [n] for the set {1, 2, . . . , n}, and $[n]^*$ for the set of all finite-length sequences of elements of [n]. We will use the Greek symbols σ and π to denote such sequences $i_1 i_2 \ldots i_T$, where $i_t \in [n]$. We let ∅ denote the empty sequence. When we have defined some T-length sequence $\sigma = i_1 i_2 \ldots i_T$, we may write $\sigma_t$ to refer to the t-length prefix of σ, namely $\sigma_t = i_1 i_2 \ldots i_t$, and clearly $t \le T$. We will generally use w to refer to a distribution in $\Delta_n$, the n-simplex, where $w_i$ denotes the i-th coordinate of w. We use the symbol $e_i$ to denote the i-th basis vector in n dimensions, namely a vector with a 1 in the i-th coordinate and 0's elsewhere. We shall use 1[·] to denote the "indicator function", where 1[predicate] is 1 if predicate is true, and 0 if it is false. It may be that predicate is a random variable, in which case 1[predicate] is a random variable as well.
2.1 The Setting: Budgeted Adversary Games
We will now describe the generic sequential decision problem, where a problem instance is characterized by the following triple: an $n \times n$ loss matrix M, a monotonic "cost function" $\mathrm{cost} : [n]^* \to \mathbb{R}_+$, and a cost budget k. A cost function is monotonic as long as it satisfies the relation $\mathrm{cost}(\sigma\pi) \le \mathrm{cost}(\sigma i\pi)$ for all $\sigma, \pi \in [n]^*$ and all $i \in [n]$. Play proceeds as follows (a simulation sketch is given after the list):

1. On each round t, the player chooses a distribution $w_t \in \Delta_n$ over his action space.
2. An outcome $i_t \in [n]$ is chosen by Nature (potentially an adversary).
3. The player suffers $w_t^\top M e_{i_t}$.
4. The game proceeds until the first round in which the budget is spent, i.e. the round T when $\mathrm{cost}(i_1 i_2 \ldots i_{T-1}) \le k < \mathrm{cost}(i_1 i_2 \ldots i_{T-1} i_T)$.
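The following minimal sketch simulates this protocol; the function names are ours, `cost` maps a list of outcomes to its cost, and we assume the adversary eventually spends the budget so the loop terminates.

import numpy as np

def play_game(M, cost, k, player, adversary):
    # player(sigma) returns a distribution over [n]; adversary(sigma) returns i_t
    sigma, total = [], 0.0
    while cost(sigma) <= k:       # the budget-breaking round T is still played
        w = player(sigma)
        i = adversary(sigma)
        total += float(w @ M[:, i])
        sigma = sigma + [i]
    return total, sigma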
The goal of the Player is to choose each $w_t$ in order to minimize the total cost of this repeated game on all sequences of outcomes. Note, importantly, that the player can learn from the past, and hence would like an efficiently computable function $w : [n]^* \to \Delta_n$, where on round t the player is given $\sigma_{t-1} = (i_1 \ldots i_{t-1})$ and sets $w_t \leftarrow w(\sigma_{t-1})$. We can define the worst-case cost of an algorithm $w : [n]^* \to \Delta_n$ by its performance against a worst-case sequence, that is,

$$\mathrm{WorstCaseLoss}(w; M, \mathrm{cost}, k) := \max_{\substack{\sigma = i_1 i_2 \ldots\, \in\, [n]^* \\ \mathrm{cost}(\sigma_{T-1})\, \le\, k\, <\, \mathrm{cost}(\sigma_T)}} \; \sum_{t=1}^{T} w(\sigma_{t-1})^\top M e_{i_t}.$$

Note that above, T is a parameter chosen according to σ and the budget. We can also define the minimax loss, which is defined by choosing the w(·) which minimizes WorstCaseLoss(·). Specifically,

$$\mathrm{MinimaxLoss}(M, \mathrm{cost}, k) := \min_{w : [n]^* \to \Delta_n} \; \max_{\substack{\sigma = i_1 i_2 \ldots\, \in\, [n]^* \\ \mathrm{cost}(\sigma_{T-1})\, \le\, k\, <\, \mathrm{cost}(\sigma_T)}} \; \sum_{t=1}^{T} w(\sigma_{t-1})^\top M e_{i_t}.$$
In the next section, we describe the optimal algorithm for a restricted class of M . That is, we obtain
the mapping w which optimizes WorstCaseLoss(w; M, cost, k).
3 The Algorithm
We will start by assuming that M is a nonnegative diagonal matrix, that is, $M = \mathrm{diag}(c_1, c_2, \ldots, c_n)$, and $c_i > 0$ for all i. With these values $c_i$, define the distribution $q \in \Delta_n$ with $q_i := \frac{1/c_i}{\sum_j 1/c_j}$.

Given a current state σ, the algorithm will rely heavily on our ability to compute the following function Φ(σ). For any $\sigma \in [n]^*$ such that $\mathrm{cost}(\sigma) > k$, define $\Phi(\sigma) := 0$. Otherwise, let

$$\Phi(\sigma) := \frac{1}{\sum_i 1/c_i}\; \mathbb{E}_{\forall t:\, i_t \sim q}\left[\sum_{t=0}^{\infty} 1[\mathrm{cost}(\sigma i_1 \ldots i_t) \le k]\right].$$

Notice, this is the expected length of a random process. Of course, we must impose the natural condition that the length of this process has a finite expectation. Also, since we assume that the cost increases, it is reasonable to require that the distribution over the length, i.e. $\min\{t : \mathrm{cost}(\sigma i_1 \ldots i_t) > k\}$, has an exponentially decaying tail. Under these weak conditions, the following m-trial Monte Carlo method will provide a high-probability estimate to error within $O(m^{-1/2})$.
Algorithm 1 Efficient Estimation of Φ(σ)
for i = 1 . . . m do
  Sample: an infinite random sequence $\pi := i_1 i_2 \ldots$ where $\Pr(i_t = i) = q_i$
  Let: $T_i = \max\{t : \mathrm{cost}(\sigma\pi_{t-1}) \le k\}$
end for
Return $\frac{1}{m}\sum_{i=1}^{m} T_i$
Notice that the infinite sequence π does not have to be fully generated. Instead, we can continue to sample the sequence and simply stop as soon as the condition $\mathrm{cost}(\sigma\pi_{t-1}) > k$ is reached. We can now define our algorithm in terms of Φ(σ).
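A direct Python rendering of Algorithm 1 might look as follows; the signature and the seeded generator are our own illustrative choices.

import numpy as np

def estimate_phi(sigma, cost, k, c, m, seed=0):
    # Monte Carlo estimate of Phi(sigma); c holds the diagonal entries c_i
    if cost(list(sigma)) > k:
        return 0.0
    rng = np.random.default_rng(seed)
    q = (1.0 / c) / np.sum(1.0 / c)
    total = 0
    for _ in range(m):
        walk, T = list(sigma), 1      # the t = 0 term: cost(sigma) <= k here
        while True:
            walk.append(int(rng.choice(len(c), p=q)))
            if cost(walk) > k:
                break
            T += 1
        total += T
    return total / (m * np.sum(1.0 / c))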
Algorithm 2 Player's optimal strategy
Input: state σ
Compute: $\Phi(\sigma), \Phi(\sigma 1), \Phi(\sigma 2), \ldots, \Phi(\sigma n)$
Let: set w(σ) with values $w_i(\sigma) = \frac{\Phi(\sigma) - \Phi(\sigma i)}{c_i}$
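On top of that estimator, Algorithm 2 is a few lines (again a hedged sketch of ours); the final clipping and renormalization only absorb Monte Carlo noise, since the exact weights already form a distribution by Lemma 4.1 below.

import numpy as np

def optimal_play(sigma, cost, k, c, m=2000):
    # w_i(sigma) = (Phi(sigma) - Phi(sigma i)) / c_i, reusing estimate_phi above
    phi = estimate_phi(sigma, cost, k, c, m)
    w = np.array([(phi - estimate_phi(list(sigma) + [i], cost, k, c, m)) / c[i]
                  for i in range(len(c))])
    w = np.clip(w, 0.0, None)         # absorb Monte Carlo noise
    return w / w.sum()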
4 Minimax Optimality
Now we prove that Algorithm 2 is both "legal" and minimax optimal.
Lemma 4.1. The vector w(σ) computed in Algorithm 2 is always a valid distribution.
Proof. It must first be established that $w_i(\sigma) \ge 0$ for all i and σ. This, however, follows because we assume that the function cost() is monotonic, which implies that $\mathrm{cost}(\sigma\pi) \le \mathrm{cost}(\sigma i\pi)$ and hence $\mathrm{cost}(\sigma i\pi) \le k \implies \mathrm{cost}(\sigma\pi) \le k$, and hence $1[\mathrm{cost}(\sigma i\pi) \le k] \le 1[\mathrm{cost}(\sigma\pi) \le k]$. Taking the expected difference of the infinite sum of these two indicators leads to $\Phi(\sigma) - \Phi(\sigma i) \ge 0$, which implies $w_i(\sigma) \ge 0$ as desired.

We must also show that $\sum_i w_i(\sigma) = 1$. We claim that the following recurrence relation holds for the function Φ(σ) whenever $\mathrm{cost}(\sigma) \le k$:

$$\Phi(\sigma) = \underbrace{\frac{1}{\sum_i 1/c_i}}_{\text{first step}} + \underbrace{\sum_i q_i \Phi(\sigma i)}_{\text{remaining steps}}.$$

This is clear from noticing that Φ is an expected random walk length, with transition probabilities defined by q, and scaled by the constant $(\sum_i 1/c_i)^{-1}$. Hence,

$$\sum_i w_i(\sigma) = \sum_i \frac{\Phi(\sigma) - \Phi(\sigma i)}{c_i} = \Big(\sum_i 1/c_i\Big)\Phi(\sigma) - \sum_i \frac{\Phi(\sigma i)}{c_i} = \Big(\sum_i 1/c_i\Big)\Big(\frac{1}{\sum_i 1/c_i} + \sum_i q_i \Phi(\sigma i)\Big) - \sum_i \frac{\Phi(\sigma i)}{c_i} = 1,$$

where the last equality holds because $q_i = \frac{1/c_i}{\sum_j 1/c_j}$.
Theorem 4.1. For $M = \mathrm{diag}(c_1, \ldots, c_n)$, Algorithm 2 is minimax optimal for the Budgeted Adversary problem. Furthermore, $\Phi(\emptyset) = \mathrm{MinimaxLoss}(M, \mathrm{cost}, k)$.
Proof. First we prove an upper bound. Notice that, for any sequence $\sigma = i_1 i_2 i_3 \ldots i_T$, the total cost of Algorithm 2 will be

$$\sum_{t=1}^{T} w(\sigma_{t-1})^\top M e_{i_t} = \sum_{t=1}^{T} w_{i_t}(\sigma_{t-1}) c_{i_t} = \sum_{t=1}^{T} \frac{\Phi(\sigma_{t-1}) - \Phi(\sigma_t)}{c_{i_t}}\, c_{i_t} = \Phi(\emptyset) - \Phi(\sigma_T) \le \Phi(\emptyset),$$

and hence the total cost of the algorithm is always bounded by $\Phi(\emptyset)$.

On the other hand, we claim that $\Phi(\emptyset)$ can always be achieved by an adversary for any algorithm $w'(\cdot)$. Construct a sequence σ as follows. Given that $\sigma_{t-1}$ has been constructed so far, select any coordinate $i_t \in [n]$ for which $w'_{i_t}(\sigma_{t-1}) \ge w_{i_t}(\sigma_{t-1})$, that is, where the algorithm $w'$ places at least as much weight on $i_t$ as the proposed algorithm w we defined in Algorithm 2. This must always be possible because both $w(\sigma_{t-1})$ and $w'(\sigma_{t-1})$ are distributions and neither can fully dominate the other. Set $\sigma_t \leftarrow \sigma_{t-1} i_t$. Continue constructing σ until the budget is reached, i.e. $\mathrm{cost}(\sigma) > k$. Now, let us check the loss of $w'$ on this sequence σ:

$$\sum_{t=1}^{T} w'(\sigma_{t-1})^\top M e_{i_t} = \sum_{t=1}^{T} w'_{i_t}(\sigma_{t-1}) c_{i_t} \ge \sum_{t=1}^{T} w_{i_t}(\sigma_{t-1}) c_{i_t} = \Phi(\emptyset) - \Phi(\sigma_T) = \Phi(\emptyset),$$

where the last equality uses $\Phi(\sigma_T) = 0$ once the budget is exceeded. Hence, an adversary can achieve at least $\Phi(\emptyset)$ loss for any algorithm $w'$.
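As a quick numeric illustration of the theorem (a toy instance of our own choosing, reusing the sketches above): the realized loss should never exceed the estimated Φ(∅), up to Monte Carlo error.

c = np.array([1.0, 2.0])
M = np.diag(c)
cost = lambda s: s.count(0)           # budget counts plays of action 0
k = 2
player = lambda s: optimal_play(list(s), cost, k, c, m=5000)
adversary = lambda s: 0               # always attack the budgeted action
loss, _ = play_game(M, cost, k, player, adversary)
print(loss, "<=", estimate_phi([], cost, k, c, m=5000))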
4.1 Extensions
For simplicity of exposition, we proved Theorem 4.1 under a somewhat limited scope: only for
diagonal matrices M , known budget k and cost(). But with some work, these restrictions can be
lifted. We sketch a few extensions of the result, although we omit the details due to lack of space.
First, the concept of a cost() function and a budget k is not entirely necessary. Indeed, we can redefine the Budgeted Adversary game in terms of an arbitrary stopping criterion $\Omega : [n]^* \to \{0, 1\}$, where $\Omega(\sigma) = 0$ is equivalent to "the budget has been exceeded". The only requirement is that $\Omega(\cdot)$ is monotonic, which is naturally defined as $\Omega(\sigma i\pi) = 1 \implies \Omega(\sigma\pi) = 1$ for all $\sigma, \pi \in [n]^*$ and all $i \in [n]$. This alternative budget interpretation lets us consider the sequence σ as a path through a game tree. At a given node $\sigma_t$ of the tree, the adversary's action $i_{t+1}$ determines which branch to follow. As soon as $\Omega(\sigma_t) = 0$ we have reached a terminal node of this tree.
Second, we need not assume that the budget k, or even the generalized stopping criterion $\Omega(\cdot)$, is known in advance. Instead, we can work with the following generalization: the stopping criterion Ω is drawn from a known prior ρ and given to the adversary before the start of the game. The resulting optimal algorithm depends simply on estimating a new version of Φ(σ). Φ(σ) is now redefined as both an expectation over a random π and a random Ω drawn from the posterior of ρ, that is, where we condition on the event $\Omega(\sigma) = 1$.
Third, Theorem 4.1 can be extended to a more general class of $M$, namely inverse-nonnegative matrices, where $M$ is invertible and $M^{-1}$ has all nonnegative entries. (In all the examples we give we need only diagonal $M$, but we sketch this generalization for completeness.) If we let $\mathbf{1}_n$ be the vector of $n$ ones, then define $D = \mathrm{diag}^{-1}(M^{-1}\mathbf{1}_n)$, which is a nonnegative diagonal matrix. Also let $N = DM^{-1}$ and notice that the rows of $N$ are the normalized rows of $M^{-1}$. We can use Algorithm 2 with the diagonal matrix $D$, and attain distribution $w'(\sigma)$ for any $\sigma$. To obtain an algorithm for the matrix $M$ (not $D$), we simply let $w(\sigma) = (w'(\sigma)^\top N)^\top$, which is guaranteed to be a distribution. The loss of $w$ is identical to that of $w'$ since $w(\sigma)^\top M = w'(\sigma)^\top D$ by construction.
Fourth, we have only discussed minimizing loss against a budgeted adversary. But all the results can be extended easily to the case where the player is instead maximizing gain (and the adversary is minimizing). A particularly surprising result is that the minimax strategy is identical in either case; that is, the recursive definition of $w_i(\sigma)$ is the same whether the player is maximizing or minimizing. However, the termination condition might change depending on whether we are minimizing or maximizing. For example, in the expert setting, the game stops when all experts have cost larger than $k$ versus when at least one expert has gain at least $k$. Therefore, for the same budget size $k$, the minimax value of the gain version is typically smaller than the value of the loss version.
Simplified Notation. For many examples, including two that we consider below, recording the entire sequence $\sigma$ is unnecessary: the only relevant information is the number of times each $i$ occurs in $\sigma$ and not where it occurs. This is the case precisely when the function $\mathrm{cost}(\sigma)$ is unchanged up to permutations of $\sigma$. In such situations, we can consider a smaller state space, which records the "counts" of each $i$ in the sequence $\sigma$. We will use the notation $s \in \mathbb{N}^n$, where $s_t = e_{i_1} + \dots + e_{i_t}$ for the sequence $\sigma_t = i_1 i_2 \dots i_t$.
5 The Cost-Sensitive Hedge Setting
A straightforward application of Budgeted Adversary games is the "Hedge setting" introduced by Freund and Schapire [10], a version of the aforementioned experts setting. The minimax algorithm for this special case was already thoroughly developed by Abernethy et al. [1]. We describe an interesting extension that can be achieved using our techniques, one which has not yet been solved.
The Hedge game goes as follows. A learner must predict a sequence of distributions $w_t \in \Delta_n$, and receives a sequence of loss vectors $\ell_t \in \{0,1\}^n$. The total loss to the learner is $\sum_t w_t \cdot \ell_t$, and the game ceases only once the best expert has more than $k$ errors, i.e. $\min_i \sum_t \ell_{t,i} > k$. The learner wants to minimize his total loss.
The natural way to transform the Hedge game into a Budgeted Adversary problem is as follows. We'll let $s$ be the state, defined as the vector of cumulative losses of all the experts:
$$M = \begin{bmatrix} 1 & & \\ & \ddots & \\ & & 1 \end{bmatrix}, \qquad \mathrm{cost}(s) = \min_i s_i, \qquad \sum_t w_t \cdot \ell_t = \sum_t w_t^\top M e_{i_t}.$$
The proposed reduction almost works, except for one key issue: this only allows cost vectors of the form $\ell_t = M e_{i_t} = e_{i_t}$, since by definition Nature chooses columns of $M$. However, as shown in Abernethy et al., this is not a problem.
Lemma 5.1 (Lemma 11 and Theorem 12 of [1]). In the Hedge game, the worst case adversary always chooses $\ell_t \in \{e_1, \dots, e_n\}$.
The standard and more well-known, although non-minimax, algorithm for the Hedge setting [10] uses a simple modification of the Weighted Majority Algorithm [14], and is described simply by setting
$$w_i(s) = \frac{\exp(-\eta s_i)}{\sum_j \exp(-\eta s_j)}.$$
With the appropriate tuning of $\eta$, it is possible to bound the total loss of this algorithm by $k + \sqrt{2 k \ln n} + \ln n$, which is known to be roughly optimal in the limit.
Abernethy et al. [1] provide the minimax optimal algorithm, but state the bound in terms of an expected length of a random walk. This is essentially equivalent to our description of the minimax cost in terms of $\Phi(\emptyset)$.
A significant drawback of the Hedge result, however, is that it requires the losses to be uniformly bounded in $[0,1]$, that is $\ell_t \in [0,1]^n$. Ideally, we would like an algorithm and a bound that can handle non-uniform cost ranges, i.e. where expert $i$ suffers loss in some range $[0, c_i]$. The $\ell_{t,i} \in [0,1]$ assumption is fundamental to the Hedge analysis, and we see no simple way of modifying it to achieve a tight bound. The simplest trick, which is just to take $c_{\max} := \max_i c_i$, leads to a bound of the form $k + \sqrt{2 c_{\max} k \ln n} + c_{\max} \ln n$, which we know to be very loose. Intuitively, this is because a single "risky" expert with a large $c_i$ should not affect the bound significantly.
In our Budgeted Adversary framework, this case can be dealt with trivially: letting $M = \mathrm{diag}(c_1, \dots, c_n)$ and $\mathrm{cost}(s) = \min_i s_i c_i$ immediately gives an algorithm that, by Theorem 4.1, we know to be minimax optimal. According to the same theorem, the minimax loss bound is simply $\Phi(\emptyset)$ which, unfortunately, is expressed in terms of a random walk length. We do not know how to obtain a closed form estimate of this expression, and we leave this as an intriguing open question.
6 Metrical Task Systems
A classic problem from the Online Algorithms community is known as Metrical Task Systems
(MTS), which we now describe. A player (decision-maker, algorithm, etc.) is presented with a
finite metric space and on each of a sequence of rounds will occupy a single state (or point) within
this metric space. At the beginning of each round the player is presented with a cost vector, describing the cost of occupying each point in the metric space. The player has the option to remain at his present state and pay this state's associated cost, or he can decide to switch to another point in the metric and pay the cost of the new state. In the latter case, however, the player must also pay the switching cost, which is exactly the metric distance between the two points.
The MTS problem is a useful abstraction for a number of problems; among these is job-scheduling.
An algorithm would like to determine on which machine, across a large network, it should process a
job. At any given time point, the algorithm observes the number of available cycles on each machine,
and can choose to migrate the job to another machine. Of course, if the subsequent machine is a
great distance, then the algorithm also pays the travel time of the job migration through the network.
Notice that, were we given a sequence of cost vectors in advance, we could compute the optimal path of the algorithm that minimizes total cost. Indeed, this is efficiently solved by dynamic programming, and we will refer to this as the optimal offline cost, or just the offline cost. What we would like is an algorithm that performs well relative to the offline cost without knowledge of the sequence of cost vectors. The standard measure of performance for an online algorithm is the competitive ratio, which is the ratio of the cost of the online algorithm to the optimal offline cost. For all the results discussed below, we assume that the online algorithm can maintain a randomized state (a distribution over the metric) and pays the expected cost according to this random choice; randomized algorithms tend to exhibit much better competitive ratios than deterministic algorithms.
When the metric is uniform, i.e. where all pairs of points are at unit distance, it is known that
the competitive ratio is O(log n), where n is the number of points in the metric; this was shown
by Borodin, Linial and Saks who introduced the problem [4]. For general metric spaces, Bartal et
al achieved a competitive ratio of O(log6 n) [3], and this was improved to O(log2 n) by Fiat and
Mendel [9]. The latter two techniques, however, rely on a scheme of randomly approximating the
metric space with a hierarchical tree metric, adding a (likely-unnecessary) multiplicative cost factor
of log n. It is widely believed that the minimax competitive ratio is O(log n) in general, but this gap
has remained elusive for at least 10 years.
The most significant progress towards $O(\log n)$ is the 2007 work of Bansal et al. [2], who achieved such a ratio for the case of "weighted-star metrics". A weighted star is a metric such that each point $i$ has a fixed distance $d_i$ from some "center state", and traveling between any state $i$ and $j$ requires going through the center, hence incurring a switching cost of $d_i + d_j$. For weighted-star metrics, Bansal et al. managed to justify two simplifications which are quite useful:
1. We can assume that the cost vector is of the form $\langle 0, \dots, \infty, \dots, 0 \rangle$; that is, all states receive 0 cost, except some state $i$ which receives an infinite cost.
2. When the online algorithm is currently maintaining a distribution $w$ over the metric, and an infinite cost occurs at state $i$, we can assume¹ that the algorithm incurs exactly $2 d_i w_i$, exactly the cost of having $w_i$ probability weight enter and leave $i$ from the center.
Bansal et al. provide an efficient algorithm for this setting using primal-dual techniques developed for solving linear programs. With the methods developed in the present paper, however, we can give the minimax optimal online algorithm under the above simplifications. Notice that the adversary is now choosing a sequence of states $i_1, i_2, i_3, \dots \in [n]$ at which to assign an infinite cost. If we let $\sigma = i_1 i_2 i_3 \dots$, then the online algorithm's job is to choose a sequence of distributions $w(\sigma_t)$, and it pays $2 d_{i_{t+1}} w_{i_{t+1}}(\sigma_t)$ at each step. In the end, the online algorithm's cost is compared to the offline MTS cost of $\sigma$, which we will call $\mathrm{cost}(\sigma)$. Assume² we know the cost of the offline in advance, say it's $k$, and let us define $M = \mathrm{diag}(2 d_1, \dots, 2 d_n)$. Then the player's job is to select an algorithm $w$ which minimizes
$$\max_{\substack{\sigma = (i_1, \dots, i_T) \\ \mathrm{cost}(\sigma) \le k}} \; \sum_{t=1}^T w(\sigma_{t-1})^\top M e_{i_t}.$$
As we have shown, Algorithm 2 is minimax optimal for this setting. The competitive ratio of this algorithm is precisely $\limsup_{k \to \infty} \frac{1}{k}\, \mathrm{MinimaxLoss}(M, \mathrm{cost}, k)$. Notice the convenient trick here: by bounding a priori the cost of the offline at $k$, we can simply imagine playing this repeated game until the budget $k$ is achieved. Then the competitive ratio is just the worst-case loss over the offline cost, $k$. On the downside, we do not know of any easy way to bound the worst-case loss $\Phi(\emptyset)$.
7 Combinatorial Information Markets
We now consider the design of so-called cost-function-based information markets, a popular type
of prediction market. This work is well-developed by Chen and Pennock [7], with much useful
discussion by Chen and Vaughan [8]. We refer the reader to the latter work, which provides a very
clear picture of the nice relationship between online learning and the design of information markets.
In the simplest setting, a prediction market is a mechanism for selling $n$ types of contract, where a contract of type $i$ corresponds to some potential future outcome, say "event $i$ will occur". The standard assumption is that the set of possible outcomes is mutually exclusive, so only one of the $n$ events will occur; for example, a pending election with $n$ competing candidates and one eventual winner. When a bettor purchases a contract of type $i$, the manager of the market, or "market maker", promises to pay out \$1 if the outcome is $i$ and \$0 otherwise.
A popular research question in recent years is how to design such prediction markets when the outcome has a combinatorial structure. An election might produce a complex outcome like a group of candidates winning, and a bettor may desire to bet on a complex predicate, such as "none of the winning candidates will be from my state". This question is explored in Hanson [13], although without much discussion of the relevant computational issues. The computational aspects of combinatorial information markets are addressed in Chen et al. [6], who provide a particular hardness result regarding computation of certain price functions, as well as a positive result for an alternative type of combinatorial market. In the present section, we propose a new technique for designing combinatorial markets using the techniques laid out in the present work.
In this type of information market, the task of a market maker is to choose a price for each of the $n$ contracts, but where the prices may be set adaptively according to the present demand. Let $s \in \mathbb{N}^n$ denote the current volume, where $s_i$ is the number of contracts sold of type $i$. In a cost-function-based market, these prices are set according to a given convex "cost function" $C(s)$ which represents a potential on the demand. It is assumed that $C(\cdot)$ satisfies the relation $C(s + \lambda \mathbf{1}) = C(s) + \lambda$ for all $s$ and $\lambda > 0$, and $\frac{\partial^2 C}{\partial s_i^2} > 0$. A typical example of such a cost function is $C(s) = b \log \sum_{i=1}^n \exp(s_i / b)$, where $b$ is a parameter (see Chen and Pennock for further discussion [7]); it's easy to check this function satisfies the desired properties.

¹ Precisely, they claim that it should be upper-bounded by $4 d_i$. We omit the details regarding this issue, but it only contributes a multiplicative factor of 2 to the competitive ratio.
² Even when we do not know the offline cost in advance, standard "doubling tricks" allow you to guess this value and increase the guess as the game proceeds. For space, we omit these details.
Given the current volume $s$, the price of contract $i$ is set at $C(s + e_i) - C(s)$. This pricing scheme has the advantage that the total money earned in this market is easy to compute: it's exactly $C(s)$, regardless of the order in which the contracts were purchased. A disadvantage of this market, however, is that the posted prices (typically) sum to greater than \$1! A primary goal of an information market is to incentivize bettors to reveal their private knowledge of the outcome of an event. If a given bettor believes the true distribution of the outcome to be $q \in \Delta_n$, he will have an incentive to purchase any contract $i$ for which the current price $p_i$ is smaller than $q_i$, thus providing positive expected reward (relative to his predicted distribution). Using this cost-function scheme, it is possible that $q_i < C(s + e_i) - C(s)$ for all $i$, and hence a bettor will have no incentive to bet.
We propose instead an alternative market mechanism that avoids this difficulty: for every given volume state $s$, the market maker will advertise a price vector $w(s) \in \Delta_n$. If a contract of type $i$ is purchased, the state proceeds to $s + e_i$, and the market maker earns $w_i(s)$. If a sequence of contracts $i_1 i_2 \dots$ is purchased, the market maker's total earning is $\sum_t w(e_{i_1} + \dots + e_{i_{t-1}}) \cdot e_{i_t}$. On the other hand, if the final demand is $s$, in the worst case the market maker may have to pay out a total of $\max_i s_i$ dollars. If we assume the market maker has a fixed budget $k$ on the maximum number of contracts he is willing to sell, and wants to maximize the total money earned from selling contracts subject to this constraint, then we have³ exactly a Budgeted Adversary problem: let $M$ be the identity and let $\mathrm{cost}(s) := \max_i s_i$.
This looks quite similar to the Budgeted Adversary reduction in the Hedge Setting described above,
which is perhaps not too surprising given the strong connections discovered in Chen and Vaughan [8]
between learning with experts and market design. But this reduction gives us additional power: we
now have a natural way to design combinatorial prediction markets. We sketch one such example,
but we note that many more can be worked out also.
Assume we are in a setting where we have $n$ election candidates, but some subset of size $m < n$ will become the "winners", and any such subset is possible. In this case, we can imagine a market maker selling a contract of type $i$ with the following promise: if candidate $i$ is in the winning subset, the payout is $1/m$, and $0$ otherwise. For similar reasons as above, the market maker should sell contracts at prices $p_i$ where $\sum_i p_i = 1$. If we assume that the market maker has a budget constraint of $k$ for the final payout, then we can handle this new setting within the Budgeted Adversary framework by simply modifying the cost function appropriately:
$$\mathrm{cost}(s) = \max_{U \subseteq [n],\, |U| = m} \; \sum_{i \in U} \frac{s_i}{m}.$$
This solution looks quite simple, so what did we gain? The benefit of our Budgeted Adversary framework is that we can handle arbitrary monotonic budget constraints, and the combinatorial nature of this problem can be encoded within the budget. We showed this for the case of "subset betting", but it can be applied to a wide range of settings with combinatorial outcomes.
8 Open problem
We have provided a very general framework for solving repeated zero-sum games against a budgeted adversary. Unfortunately, the generality of these results only goes as far as games with payoff matrices that are inverse-nonnegative. For one-shot games, of course, Von Neumann's minimax theorem leads us to an efficient algorithm, i.e. linear programming, which can handle any payoff matrix, and we would hope this is achievable here. We thus pose the following open question: Is there an efficient algorithm for solving Budgeted Adversary games for arbitrary matrices $M$?
³ The careful reader may notice that this modified model may lead to a problem not present in the cost-function-based markets: an arbitrage opportunity for the bettors. This issue can be dealt with by including a sufficient transaction fee per contract, but we omit these details due to space constraints.
References
[1] J. Abernethy, M. K. Warmuth, and J. Yellin. Optimal strategies from random walks. In Proceedings of the 21st Annual Conference on Learning Theory (COLT 08), pages 437-445, July 2008.
[2] Nikhil Bansal, Niv Buchbinder, and Joseph (Seffi) Naor. A primal-dual randomized algorithm for weighted paging. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science, pages 507-517. IEEE Computer Society, 2007.
[3] Y. Bartal, A. Blum, C. Burch, and A. Tomkins. A polylog(n)-competitive algorithm for metrical task systems. In Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pages 711-719, 1997.
[4] A. Borodin, N. Linial, and M. E. Saks. An optimal on-line algorithm for metrical task system. Journal of the ACM (JACM), 39(4):745-763, 1992.
[5] B. Brügmann. Monte Carlo Go. Master's Thesis, Unpublished, 1993.
[6] Y. Chen, L. Fortnow, N. Lambert, D. M. Pennock, and J. Wortman. Complexity of combinatorial market makers. In Proceedings of the ACM Conference on Electronic Commerce (EC), 2008.
[7] Y. Chen and D. M. Pennock. A utility framework for bounded-loss market makers. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, pages 49-56, 2007.
[8] Y. Chen and J. W. Vaughan. A new understanding of prediction markets via no-regret learning. Arxiv preprint arXiv:1003.0034, 2010.
[9] A. Fiat and M. Mendel. Better algorithms for unfair metrical task systems and applications. In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, pages 725-734, 2000.
[10] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119-139, 1997. Special Issue for EuroCOLT '95.
[11] S. Gelly and D. Silver. Combining online and offline knowledge in UCT. In Proceedings of the 24th International Conference on Machine Learning, page 280, 2007.
[12] S. Gelly, Y. Wang, R. Munos, and O. Teytaud. Modification of UCT with patterns in Monte-Carlo Go. 2006.
[13] R. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):107-119, 2003.
[14] N. Littlestone and M. K. Warmuth. The Weighted Majority algorithm. Inform. Comput., 108(2):212-261, 1994. Preliminary version in FOCS 89.
Practical Large-Scale Optimization
for Max-Norm Regularization
Jason Lee
Institute of Computational and Mathematical Engineering
Stanford University
email: [email protected]
Ruslan Salakhutdinov
Brain and Cognitive Sciences and CSAIL
Massachusetts Institute of Technology
email: [email protected]
Benjamin Recht
Department of Computer Sciences
University of Wisconsin-Madison
email: [email protected]
Nathan Srebro
Toyota Technological Institute at Chicago
email: [email protected]
Joel A. Tropp
Computing and Mathematical Sciences
California Institute of Technology
email: [email protected]
Abstract
The max-norm was proposed as a convex matrix regularizer in [1] and was shown
to be empirically superior to the trace-norm for collaborative filtering problems.
Although the max-norm can be computed in polynomial time, there are currently
no practical algorithms for solving large-scale optimization problems that incorporate the max-norm. The present work uses a factorization technique of Burer
and Monteiro [2] to devise scalable first-order algorithms for convex programs
involving the max-norm. These algorithms are applied to solve huge collaborative filtering, graph cut, and clustering problems. Empirically, the new methods
outperform mature techniques from all three areas.
1 Introduction
A foundational concept in modern machine learning is to construct models for data by balancing
the complexity of the model against fidelity to the measurements. In a wide variety of applications,
such as collaborative filtering, multi-task learning, multi-class learning and clustering of multivariate
observations, matrices offer a natural way to tabulate data. For such matrix models, the matrix rank
provides an intellectually appealing way to describe complexity. The intuition behind this approach
holds that many types of data arise from a noisy superposition of a small number of simple (i.e.,
rank-one) factors.
Unfortunately, optimization problems involving rank constraints are computationally intractable except in a few basic cases. To address this challenge, researchers have searched for alternative complexity measures that can also promote low-rank models. A particular example of a low-rank regularizer that has received a huge amount of recent attention is the trace-norm, equal to the sum of
the matrix's singular values (see the comprehensive survey [3] and its bibliography). The trace-norm promotes low-rank decompositions because it minimizes the $\ell_1$ norm of the vector of singular
values, which encourages many zero singular values.
Although the trace-norm is a very successful regularizer in many applications, it does not seem to be
widely known or appreciated that there are many other interesting norms that promote low rank. The
paper [4] is one of the few articles in the machine learning literature that pursues this idea with any
vigor. The current work focuses on another rank-promoting regularizer, sometimes called the max-norm, that has been proposed as an alternative to the rank for collaborative filtering problems [1, 5].
The max-norm can be defined via matrix factorizations:
$$\|X\|_{\max} := \inf\left\{ \|U\|_{2,\infty}\, \|V\|_{2,\infty} \;:\; X = U V^\top \right\} \tag{1}$$
where $\|\cdot\|_{2,\infty}$ denotes the maximum $\ell_2$ row norm of a matrix:
$$\|A\|_{2,\infty} := \max_j \left( \sum_k A_{jk}^2 \right)^{1/2}.$$
For general matrices, the computation of the max-norm can be rephrased as a semidefinite program; see (4) below. When $X$ is positive semidefinite, we may force $U = V$ and then verify that $\|X\|_{\max} = \max_j X_{jj}$, which should explain the terminology.
The fundamental result in the metric theory of tensor products, due to Grothendieck, states that the max-norm is comparable with a nuclear norm (see Chapter 10 of [6]):
$$\|X\|_{\max} \approx \inf\left\{ \|\lambda\|_1 \;:\; X = \sum_j \lambda_j u_j v_j^\top \;\text{ where }\; \|u_j\|_\infty = 1 \text{ and } \|v_j\|_\infty = 1 \right\}.$$
The factor of equivalence $1.676 \le K_G \le 1.783$ is called Grothendieck's constant. The trace-norm, on the other hand, is equal to
$$\|X\|_{\mathrm{tr}} := \inf\left\{ \|\lambda\|_1 \;:\; X = \sum_j \lambda_j u_j v_j^\top \;\text{ where }\; \|u_j\|_2 = 1 \text{ and } \|v_j\|_2 = 1 \right\}.$$
This perspective reveals that the max-norm promotes low-rank decompositions with factors in $\ell_\infty$, rather than the $\ell_2$ factors produced by the trace-norm! Heuristically, we expect max-norm regularization to be effective for uniformly bounded data, such as preferences.
The literature already contains theoretical and empirical evidence that the max-norm is superior to
the trace-norm for certain types of problems. Indeed, the max-norm offers better generalization
error bounds for collaborative filtering [5], and it outperforms the trace-norm in small-scale experiments [1]. The paper [7] provides further evidence that the max-norm serves better for collaborative
filtering with nonuniform sampling patterns.
We believe that the max-norm has not achieved the same prominence as the trace-norm because of an
apprehension that it is challenging to solve optimization problems involving a max-norm regularizer.
The goal of this paper is to refute this misconception.
We provide several algorithms that are effective for very large scale problems, and we demonstrate
the power of the max-norm regularizer using examples from a variety of applications. In particular,
we study convex programs of the form
$$\min \; f(X) + \mu \|X\|_{\max} \tag{2}$$
where $f$ is a smooth function and $\mu$ is a positive penalty parameter. Section 4 outlines a proximal-point method, based on the work of Fukushima and Mine [8], for approaching (2). We also study the bound-constrained problem
$$\min \; f(X) \quad \text{subject to} \quad \|X\|_{\max} \le B. \tag{3}$$
Of course, (2) and (3) are equivalent for appropriate choices of $\mu$ and $B$, but we describe scenarios
where there may be a preference for one versus the other. Section 3 provides a projected gradient
method for (3), and Section 5 develops a stochastic implementation that is appropriate for decomposable loss functions. These methods can be coded up in a few lines of numerical python or Matlab,
and they scale to huge instances, even on a standard desktop machine. In Section 6, we apply these
new algorithms to large-scale collaborative filtering problems, and we demonstrate performance superior to methods based on the trace-norm. We apply the algorithms to solve enormous instances
of graph cut problems, and we establish that clustering based on these cuts outperforms spectral
clustering on several data sets.
2 The SDP and Factorization Approaches
The max-norm of an $m \times n$ matrix $X$ can be expressed as the solution to a semidefinite program:
$$\|X\|_{\max} = \min \; t \quad \text{subject to} \quad \begin{bmatrix} W_1 & X \\ X^\top & W_2 \end{bmatrix} \succeq 0, \;\; \mathrm{diag}(W_1) \le t, \;\; \mathrm{diag}(W_2) \le t. \tag{4}$$
Unfortunately, standard interior-point methods for this problem do not scale to matrices with more
than a few hundred rows or columns. For large-scale problems, we use an alternative formulation
suggested by (1) that explicitly works with a factorization of the decision variable X.
We employ an idea of Burer and Monteiro [2] that has far reaching consequences. The positive
definite constraint in the SDP formulation above is trivially satisfied if we define L and R via
$$\begin{bmatrix} W_1 & X \\ X^\top & W_2 \end{bmatrix} = \begin{bmatrix} L \\ R \end{bmatrix} \begin{bmatrix} L \\ R \end{bmatrix}^\top.$$
Burer and Monteiro showed that as long as L and R have sufficiently many columns, then the global
optimum of (4) is equal to that of
$$\|X\|_{\max} = \min_{(L,R)\,:\,LR^\top = X} \; \max\{\|L\|_{2,\infty}^2,\, \|R\|_{2,\infty}^2\}. \tag{5}$$
In particular, we may assume that the number of columns is less than $m + n$. This formulation of the max-norm is nonconvex because it involves a constraint on the product $LR^\top$, but Burer and Monteiro proved that each local minimum of the reformulated problem is also a global optimum [9]. If
we select L and R to have a very small number of columns, say r, then the number of real decision
variables in the optimization problems (2) and (3) is reduced from mn to r(m + n), a dramatic
improvement in the dimensionality of the problem. On the other hand, the new formulation is nonconvex with respect to L and R so it might not be efficiently solvable. In what follows, we present
fast, first-order methods for solving (2) and (3) via this low-dimensional factored representation.
3 Projected Gradient Method
The constrained formulation (3) admits a simple projected gradient algorithm. We replace $X$ with the product $LR^\top$ and use the factored form of the max-norm (5) to obtain
$$\min_{(L,R)} \; f(LR^\top) \quad \text{subject to} \quad \max\{\|L\|_{2,\infty}^2,\, \|R\|_{2,\infty}^2\} \le B. \tag{6}$$
The projected gradient descent method fixes a step size $\tau$ and computes updates with the rule
$$\begin{bmatrix} L \\ R \end{bmatrix} \leftarrow P_B\!\left( \begin{bmatrix} L - \tau\, \nabla f(LR^\top)\, R \\ R - \tau\, \nabla f(LR^\top)^\top L \end{bmatrix} \right)$$
where $P_B$ denotes the Euclidean projection onto the set $\{(L,R) : \max(\|L\|_{2,\infty}^2, \|R\|_{2,\infty}^2) \le B\}$.
This projection can be computed by re-scaling the rows of the current iterate whose norms exceed $\sqrt{B}$ so that their norms equal $\sqrt{B}$. Rows with norms less than $\sqrt{B}$ are unchanged by the projection. The projected gradient algorithm is elegant and simple, and it has an online implementation, described below. Moreover, using an Armijo line search rule to guarantee sufficient decrease of the cost function, we can guarantee convergence to a stationary point of (3); see [10, Sec. 2.3].
4 Proximal Point Method for Penalty Formulation
Solving (2) is slightly more complicated than its constrained counterpart. We employ a classical
proximal point method, proposed by Fukushima and Mine [8], which forms the algorithmic foundation of many popular first-order methods for $\ell_1$-norm minimization [11, 12] and trace-norm
minimization [13, 14]. The key idea is that our cost function is the sum of a smooth term plus a
convex term. At each iteration, we replace the smooth term by a linear approximation. The new
cost function can then be minimized in closed form. Before describing the proximal point algorithm
in detail, we first discuss how a simple max-norm problem (the Frobenius norm plus a max-norm
penalty) admits an explicit formula for its unique optimal solution.
Consider the simple regularization problem
$$\min_W \; \|W - V\|_F^2 + \mu \|W\|_{2,\infty}^2 \tag{7}$$
Algorithm 1 Compute $W = \mathrm{squash}(V, \mu)$
Require: A $d \times D$ matrix $V$, a positive scalar $\mu$.
Ensure: A $d \times D$ matrix $W \in \arg\min_Z \|Z - V\|_F^2 + \mu \|Z\|_{2,\infty}^2$.
1: for $k = 1$ to $d$ set $n_k \leftarrow \|v_k\|_2$
2: sort $\{n_k\}$ in descending order. Let $\pi$ denote the sorting permutation such that $n_{\pi(j)}$ is the $j$th largest element in the sequence.
3: for $k = 1$ to $d$ set $s_k \leftarrow \sum_{i=1}^{k} n_{\pi(i)}$.
4: $q \leftarrow \max\{k : n_{\pi(k)} \ge \frac{s_k}{k + \mu}\}$
5: $\beta \leftarrow \frac{s_q}{q + \mu}$
6: for $k = 1$ to $d$: if $k \le q$, set $w_{\pi(k)} \leftarrow \beta\, v_{\pi(k)} / \|v_{\pi(k)}\|_2$; otherwise set $w_{\pi(k)} \leftarrow v_{\pi(k)}$
where $W$ and $V$ are $d \times D$ matrices. Just as with $\ell_1$-norm and trace-norm regularization, this problem can be solved in closed form. An efficient algorithm to solve (7) is given by Algorithm 1. We call this procedure squash because the rows of $V$ with large norm have their magnitude clipped at a critical value $\beta = \beta(V, \mu)$.
Proposition 4.1. $\mathrm{squash}(V, \mu)$ is an optimal solution of (7).
The proof of this proposition follows from an analysis of the KKT conditions for the regularized problem. We include a full derivation in the appendix. Note that squash can be computed in $O(d \max\{\log d, D\})$ flops. Computing the row norms requires $O(dD)$ flops, and then the sort requires $O(d \log d)$ flops. Computing $\beta$ and $q$ requires $O(d)$ operations. Constructing $W$ then requires $O(dD)$ operations.
With the squash function in hand, we can now describe our proximal-point algorithm. Replace the decision variable $X$ in (2) with $LR^\top$. With this substitution and the factored form of the max-norm (5), Problem (2) reduces to
$$\min_{(L,R)} \; f(LR^\top) + \mu \max\{\|L\|_{2,\infty}^2,\, \|R\|_{2,\infty}^2\}. \tag{8}$$
For ease of notation, define $A$ to be the matrix of factors stacked on top of one another, $A = \begin{bmatrix} L \\ R \end{bmatrix}$. With this notation, we have $\|A\|_{2,\infty}^2 = \max\{\|L\|_{2,\infty}^2,\, \|R\|_{2,\infty}^2\}$. Also let $\tilde f(A)$ denote $f(LR^\top)$, and define $\varphi(A) := \tilde f(A) + \mu \|A\|_{2,\infty}^2$.
Using the squash algorithm, we can solve
$$\min_A \; \langle \nabla \tilde f(A_k), A \rangle + \tau_k^{-1} \|A - A_k\|_F^2 + \mu \|A\|_{2,\infty}^2 \tag{9}$$
in closed form. To see this, complete the square and multiply by $\tau_k$. Then (9) is equivalent to (7) with the identifications $W = A$, $V = A_k - \tau_k \nabla \tilde f(A_k)$, $\mu = \tau_k \mu$. That is, the optimal solution of (9) is $\mathrm{squash}\big(A_k - \tau_k \nabla \tilde f(A_k),\, \tau_k \mu\big)$.
We can now directly apply the proximal-point algorithm of Fukushima and Mine, detailed in Algorithm 2. Step 2 is the standard linearized proximal-point method that is prevalent in convex algorithms like Mirror Descent and Nesterov's optimal method. The cost function $\tilde f$ is replaced with a quadratic approximation localized at the previous iterate $A_k$, and the resulting approximation (9) can be solved in closed form. Step 3 is a backtracking line search that looks for a step that obeys an Armijo step rule. This line search guarantees that the algorithm produces a sufficiently large decrease of the cost function at each iteration, but it may require several function evaluations to find $l$. This algorithm is guaranteed to converge to a critical point of (8) as long as the step sizes are chosen commensurate with the norm of the Hessian [8]. In particular, Nesterov has recently shown that if $\tilde f$ has a Lipschitz-continuous gradient with Lipschitz constant $L$, then the algorithm will converge at a rate of $1/k$, where $k$ is the iteration counter [15].
Algorithm 2 A proximal-point method for max-norm regularization
Require: Algorithm parameters $\epsilon > 0$, $1 > \beta > 0$, $\mathrm{tol} > 0$. A sequence of positive numbers $\{\tau_k\}$. An initial point $A_0 = (L_0, R_0)$ and a counter $k$ set to 0.
Ensure: A critical point of (8).
1: repeat
2: Solve (9) to find $\hat A_k$. That is, $\hat A_k \leftarrow \mathrm{squash}\big(A_k - \tau_k \nabla \tilde f(A_k),\, \tau_k \mu\big)$.
3: Compute the smallest nonnegative integer $l$ such that
$$\varphi\big(A_k + \beta^l (\hat A_k - A_k)\big) - \varphi(A_k) \le -\epsilon\, \beta^l\, \|A_k - \hat A_k\|_F^2.$$
4: set $A_{k+1} \leftarrow (1 - \beta^l) A_k + \beta^l \hat A_k$, $k \leftarrow k + 1$.
5: until $\dfrac{\|A_k - \hat A_k\|_F^2}{\|A_k\|_F^2} < \mathrm{tol}$

5 Stochastic Gradient
For many problems, including matrix completion and max-cut problems, the cost function decomposes over the individual entries in the matrix, so the function $f(LR^\top)$ takes the particularly simple form:
$$f(L, R) = \sum_{(i,j) \in S} \ell(Y_{ij}, L_i^\top R_j) \tag{10}$$
where $\ell$ is some fixed loss function, $S$ is a set of row-column indices, $Y_{ij}$ are some real numbers, and $L_i$ and $R_j$ denote the $i$th row of $L$ and the $j$th row of $R$, respectively. When dealing with very large datasets, $S$ may consist of hundreds of millions of pairs, and there are algorithmic advantages to utilizing stochastic gradient methods that only query a very small subset of $S$ at each iteration. Indeed, the above decomposition for $f$ immediately suggests a stochastic gradient method: pick one training pair $(i, j)$ at random at each iteration, take a step in the direction opposite the gradient of $\ell(Y_{ij}, L_i^\top R_j)$, and then either apply the projection $P_B$ described in Section 3 or the squash function described in Section 4.
The projection $P_B$ is particularly easy to compute in the stochastic setting. Namely, if $\|L_i\|^2 > B$, we project it back so that $\|L_i\| = \sqrt{B}$; otherwise we do not do anything (and similarly for $R_j$). We need not look at any other rows of $L$ and $R$. As we demonstrate in the experimental results section, this simple algorithm is computationally as efficient as optimization with the trace-norm.
We can also implement an efficient algorithm for stochastic gradient descent for problem (2). If we
wanted to apply the squash algorithm to such a stochastic gradient step, only the norms corresponding to Li and Rj would be modified. Hence, in Algorithm 1, if the set of row norms of L and R
is sorted from the previous iteration, we can implement a balanced-tree data structure that allows us
to perform individual updates in amortized logarithmic time. We leave such an implementation to
future work. In the experiments, however, we demonstrate that the proximal point method is still
quite efficient and fast when dealing with stochastic gradient updates corresponding to medium-size
batches {(i, j)} selected from S, even if a full sort is performed at each squash operation.
6 Numerical Experiments
Matrix Completion. We tested our proximal point and projected gradient methods on the Netflix dataset, which is the largest publicly available collaborative filtering dataset. The training set
contains 100,480,507 ratings from 480,189 anonymous users on 17,770 movie titles. Netflix also
provides a qualification set, containing 1,408,395 ratings. The "qualification set" pairs were selected
by Netflix from the most recent ratings for a subset of the users. As a baseline, Netflix provided the
test score of its own system trained on the same data, which is 0.9514. This dataset is interesting for
several reasons. First, it is very large, and very sparse (98.8% sparse). Second, the dataset is very
imbalanced, with highly nonuniform samples. It includes users with over 10,000 ratings as well as
users who rated fewer than 5 movies.
5
[Figure 1, left panel: training and qualification RMSE (y-axis, 0.75 to 1.15) versus number of epochs (x-axis, 0 to 40) for the Proximal Point and Projected Gradient methods.]

Algorithm             Train f(X)   Train f(X) + mu*||X||max   ||X||max   Qual f(X)
Proximal Point        0.7676       0.7689                     2.5549     0.9150
Projected Gradient    0.7728       0.7739                     2.2500     0.9138
Trace-norm            -            -                          -          0.9235
Weighted Trace-norm   -            -                          -          0.9105
Figure 1: Performance of regularization methods on the Netflix dataset.
For the Netflix dataset, we will evaluate our algorithms based on the root mean squared error (RMSE) of their predictions. To this end, the objective we seek to minimize takes the following form:
$$\min_{L,R} \; \frac{1}{|S|} \sum_{(i,j) \in S} (Y_{ij} - L_i^\top R_j)^2 + \mu \max\{\|L\|_{2,\infty}^2,\, \|R\|_{2,\infty}^2\}$$
where $S$ here represents the set of observed user-movie pairs and $Y_{ij}$ denotes the provided ratings. For all of our experiments, we learned a factorization $LR^\top$ with $k = 30$ dimensions (factors).
In our experiments, all ratings were normalized to be zero-mean by subtracting 3.6. To
speed up learning, we subdivided the Netflix dataset into minibatches, each containing 100,000
user/movie/rating triplets. Both proximal-point and projected gradient methods performed 40
epochs (or passes through the training set), with parameters {L, R} updated after each minibatch.
For both algorithms we used momentum of 0.9, and a step size of 0.005, which was decreased by
a factor of 0.8 after each epoch. For the proximal-point method, $\mu$ was set to $5 \times 10^{-4}$, and for
the projected gradient algorithm, B was set to 2.25. The running times of both algorithms on this
large-scale Netflix dataset is comparable. On a 2.5 GHz Intel Xeon, our implementation of projected
gradient takes 20.1 minutes per epoch, whereas the proximal-point method takes about 19.6 minutes.
Figure 1 shows predictive performance of both the proximal-point and projected gradient algorithms
on the training and qualification set. Observe that the proximal-point algorithm converges considerably faster than projected gradient, but both algorithms achieve a similar RMSE of 0.9150 (proximal
point) and 0.9138 (projected gradient) on the qualification set. Figure 1, left panel, further shows that
the max-norm based regularization significantly outperforms the corresponding trace-norm based
regularization, which is widely used in many large-scale collaborative filtering applications. We
also note that the differences between the max-norm and the weighted trace-norm [7] are rather
small, with the weighted trace-norm slightly outperforming max-norm.
Gset Max-Cut Experiments. In the MAX-CUT problem, we are given a graph $G = (V, E)$, and we aim to solve the problem
$$\text{maximize} \; \sum_{(i,j) \in E} (1 - x_i x_j) \quad \text{subject to} \quad x_i^2 = 1 \;\; \forall i \in V$$
The heralded Goemans-Williamson relaxation [16] converts this problem into a constrained, symmetric max-norm problem:
$$\text{maximize} \; \sum_{(i,j) \in E} (1 - X_{ij}) \quad \text{subject to} \quad \|X\|_{\max} \le 1, \;\; X \succeq 0.$$
In our nonconvex formulation, this optimization becomes
$$\text{maximize} \; \sum_{(i,j) \in E} (1 - A_i^\top A_j) \quad \text{subject to} \quad \|A\|_{2,\infty}^2 \le 1.$$
Since the decision variable is symmetric and positive definite, we only need one factor $A$ of size $|V| \times r$. In all of our experiments with MAX-CUT type problems, we fixed $r = 20$. We used a diminishing step size rule of $\tau_k = \frac{\tau_0}{\sqrt{k}}$, where $k$ is the iteration counter.
Graph   Primal Obj.   Time (.1%)   Iter. (.1%)   Time (1%)   Iter. (1%)   SDPLR Obj.   SDPLR Time   |V|     |E|
G22     14128.5       0.6          150           0.4         100          14135.7      3            2000    19990
G35     8007.4        0.5          200           0.3         100          8014.6       4            2000    11778
G36     7998.3        0.5          200           0.3         100          8005.9       7            2000    11766
G58     20116.6       2            300           0.7         100          20135.90     29           5000    29570
G60     15207.0       2.1          400           0.29        50           15221.9      6            7000    17148
G67     7736.4        21.4         2050          1.3         100          7744.1       15           10000   20000
G70     9851.51       8.7          1700          0.5         100          9861.2       21           10000   9999
G72     7800.4        13.8         2250          0.6         100          7808.2       15           10000   20000
G77     11034.1       18.6         2150          0.9         100          11045.1      20           14000   28000
G81     15639.6       28.4         2200          1.35        100          15655.2      33           20000   40000

Table 1: Performance of projected gradient on Gset graphs. Columns show primal objective within .1% of optimal, running time for .1% of optimal, number of iterations to reach .1% of optimal, running time for 1% of optimal, number of iterations to reach 1% of optimal, primal objective using SDPLR, running time of SDPLR, number of vertices, and number of edges. In our experiments, we set $\tau_0 = 1$.
[Figure 2: two panels, (a) Spectral Clustering and (b) Max-cut clustering.]
Figure 2: Comparison of spectral clustering (left) with MAX-CUT clustering (right).
We tested our projected gradient algorithm on graphs drawn from the Gset, a collection of graphs designed for testing the efficacy of max-cut algorithms [17]. The results for a subset of these appear in Table 1, along with a comparison against a C implementation of Burer's SDPLR code, which has been optimized for the particular structure of the MAX-CUT problem [18]. On the same modern hardware, a Matlab implementation of our projected gradient method can reach .1% of the optimal value faster than the optimized and compiled SDPLR code.
2-class Clustering Experiments. For the 2-class clustering problem, we first build a $K$-nearest neighbor graph with $K = 10$ and weights $w_{ij}$ defined as $w_{ij} = \max(s_i(j), s_j(i))$, with $s_i(j) = \exp\!\big(-\frac{\|x_i - x_j\|^2}{2\sigma_i^2}\big)$ and $\sigma_i$ equal to the distance from $x_i$ to its $K$th closest neighbor. We then choose a scalar $\rho > 0$ and define an inverse similarity adjacency matrix $Q$ by $Q_{ij} = \rho - W_{ij}$. The parameter $\rho$ controls the balancing of the clusters; a large value of $\rho$ forces the clusters to be of equal size. We solve the MAX-CUT problem on the graph $Q$ to find our cluster assignments.
As a synthetic example, we generated a "two moons" dataset consisting of two half-circles in $\mathbb{R}^2$, with the bottom half-circle shifted to the right by 1/2 and shifted up by 1/2. The data is then embedded into $\mathbb{R}^D$ and each embedded component is corrupted with Gaussian noise with variance $\sigma^2$. For the two moons experiments, we fix $D = 100$, $n = 2000$ and $\sigma = .02$, as done in [19]. The parameters are set to $\rho = .01$ and $\tau_0 = 3/2$; the algorithm was executed for 1500 iterations. For the clustering experiments, we repeat the randomized rounding technique [16] for 100 trials, and we choose the rounding with the highest primal objective.
We compare our MAX-CUT clusterings with the spectral clustering method [20] and the Total Variation Graph Cut algorithm [19]. Figure 2 shows the clustering results for spectral clustering and MAX-CUT clustering. In all the trials, spectral clustering incorrectly clustered the two ends of both half-circles. For the clustering problems, the two measures of performance we consider are misclassification error rate (number of misclassified points divided by $n$) and cut cost. The cut cost is defined as $\sum_{i \in V_1, j \in V_2} W_{ij}$. The MAX-CUT clustering obtained smaller misclassification error in 98 of the 100 trials we performed and smaller cut cost in every trial.
On the MNIST database, we build the 10-NN graph described above on the digits 4 and 9, where we set $\rho = .001$ and $r = 8$. The NN-graph is of size 14,000 and the MAX-CUT algorithm takes approximately 1 minute to run 1,000 iterations. The same procedure is repeated for the digits 3 and 5. The results are shown in Table 2. Our MAX-CUT clustering algorithm again performs substantially better than the spectral method.

                 max-cut                                spectral              TV
                 Error Rate   Cost     Time   Balance   Error Rate   Cost     Error Rate
Two Moons        0.053        311.9    13     .495      0.171        387.8    0.082
MNIST 4 and 9    0.021        1025.5   90     .493      0.458        1486.5   N/A
MNIST 3 and 5    0.016        830.9    53     .476      0.092        2555.1   N/A

Table 2: Clustering results. Error rate, cut cost, and running time comparison for MAX-CUT, spectral, and total variation (TV) algorithms. The balance of the cut is computed as min(|V1|, |V2|)/(|V1| + |V2|). The two moons results are averaged over 100 trials.
7 Summary
In this paper we presented practical methods for solving very large scale optimization problems involving a max-norm constraint or regularizer. Using these approaches, we showed evidence that the max-norm can often be superior to established techniques such as trace-norm regularization and spectral clustering, supplementing previous evidence on small-scale problems. We hope that the increasing evidence of the utility of max-norm regularization, combined with the practical optimization techniques we present here, will reignite interest in using the max-norm for various machine learning applications.
Acknowledgements
RS supported by NSERC, Shell, and NTT Communication Sciences Laboratory. JAT supported by ONR award N00014-08-1-0883, DARPA award N66001-08-1-2065, and AFOSR award FA9550-09-1-0643. JL thanks TTI Chicago for hosting him while this work was completed.
A Proof of the correctness of squash
Rewrite (7) as the constrained optimization
$$\min_{W, t} \; \sum_{i=1}^d \|w_i - v_i\|^2 + \mu t \quad \text{subject to} \quad \|w_i\|^2 \le t \;\text{ for } 1 \le i \le d$$
Forming a Lagrangian with a vector of Lagrange multipliers $p \ge 0$,
$$L(W, t, p) = \sum_{i=1}^d \|w_i - v_i\|^2 + \mu t + \sum_{i=1}^d p_i \left( \|w_i\|^2 - t \right),$$
the KKT conditions for this problem thus read (a) $w_i = \frac{1}{1 + p_i} v_i$, (b) $p \ge 0$, (c) $\sum_{i=1}^d p_i = \mu$, (d) $\|w_i\|^2 \le t$ for $1 \le i \le d$, (e) $p_i > 0 \implies \|w_i\|^2 = t$, and (f) $\|w_i\|^2 < t \implies p_i = 0$.
With our candidate $W = \mathrm{squash}(V, \mu)$, we need only find $t$ and $p$ to verify the optimality conditions. Let $\beta$ be as in Algorithm 1 and set $t = \beta^2$ and
$$p_k = \begin{cases} \dfrac{\|v_k\|}{\beta} - 1 & 1 \le \pi(k) \le q \\[4pt] 0 & \text{otherwise.} \end{cases}$$
This definition of $p$ immediately gives (a). For (b), note that by the definition of $q$, $\|v_k\| \ge \beta$ for $1 \le \pi(k) \le q$. Thus, $p \ge 0$. Moreover,
$$\sum_{k=1}^d p_k = \frac{\sum_{1 \le \pi(k) \le q} \|v_k\|}{\beta} - q = q + \mu - q = \mu,$$
yielding (c). Also, by construction, $\|w_k\| = \beta$ if $\pi(k) \le q$, verifying (e). Finally, again by the definition of $q$, we have
$$\|v_{\pi(q+1)}\| < \frac{1}{\mu + q + 1} \sum_{k=1}^{q+1} \|v_{\pi(k)}\| = \frac{1}{\mu + q + 1} \|v_{\pi(q+1)}\| + \frac{\mu + q}{\mu + q + 1}\, \beta$$
which implies $\|v_{\pi(q+1)}\| < \beta$. Since $\|v_k\| \le \|v_{\pi(q+1)}\|$ for $\pi(k) > q$, this gives (d) and the slackness condition (f).
References
[1] Nathan Srebro, Jason Rennie, and Tommi Jaakkola. Maximum margin matrix factorization. In Advances in Neural Information Processing Systems, 2004.
[2] Samuel Burer and R. D. C. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming (Series B), 95:329-357, 2003.
[3] Benjamin Recht, Maryam Fazel, and Pablo Parrilo. Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. SIAM Review, 2007. To appear. Preprint available at http://pages.cs.wisc.edu/~brecht/publications.html.
[4] Francis R. Bach, Julien Mairal, and Jean Ponce. Convex sparse matrix factorizations. Preprint available at arxiv.org/abs/0812.1869, 2008.
[5] Nathan Srebro and Adi Shraibman. Rank, trace-norm and max-norm. In 18th Annual Conference on Learning Theory (COLT), 2005.
[6] G. J. O. Jameson. Summing and Nuclear Norms in Banach Space Theory. Number 8 in London Mathematical Society Student Texts. Cambridge University Press, Cambridge, UK, 1987.
[7] Ruslan Salakhutdinov and Nathan Srebro. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. Preprint available at arxiv.org/abs/1002.2780, 2010.
[8] Masao Fukushima and Hisashi Mine. A generalized proximal point algorithm for certain non-convex minimization problems. International Journal of Systems Science, 12(8):989-1000, 1981.
[9] Samuel Burer and Changhui Choi. Computational enhancements in low-rank semidefinite programming. Optimization Methods and Software, 21(3):493-512, 2006.
[10] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 2nd edition, 1999.
[11] T. Hale, W. Yin, and Y. Zhang. A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing. Dept. Computat. Appl. Math., Rice Univ., Houston, TX, Tech. Rep. TR07-07, 2007.
[12] Stephen J. Wright, Robert Nowak, and Mário A. T. Figueiredo. Sparse reconstruction by separable approximation. Journal version, to appear in IEEE Transactions on Signal Processing. Preprint available at http://www.optimization-online.org/DB_HTML/2007/10/1813.html, 2007.
[13] Jian-Feng Cai, Emmanuel J. Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. To appear in SIAM J. on Optimization. Preprint available at http://arxiv.org/abs/0810.3286, 2008.
[14] Shiqian Ma, Donald Goldfarb, and Lifeng Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Preprint available at http://www.optimization-online.org/DB_HTML/2008/11/2151.html, 2008.
[15] Yurii Nesterov. Gradient methods for minimizing composite objective function. To appear. Preprint available at http://www.optimization-online.org/DB_HTML/2007/09/1784.html, September 2007.
[16] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42:1115-1145, 1995.
[17] The Gset is available for download at http://www.stanford.edu/~yyye/yyye/Gset/.
[18] Samuel Burer. SDPLR. Software available at http://dollar.biz.uiowa.edu/~sburer/www/doku.php?id=software#sdplr.
[19] Arthur Szlam and Xavier Bresson. A total variation-based graph clustering algorithm for Cheeger ratio cuts. To appear in ICML 2010. Preprint available at ftp://ftp.math.ucla.edu/pub/camreport/cam09-68.pdf, 2010.
[20] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
9
| 4124 |@word trial:5 version:1 polynomial:1 norm:68 nd:1 heuristically:1 seek:1 linearized:1 r:1 decomposition:3 prominence:1 pick:1 dramatic:1 initial:1 substitution:1 contains:2 score:1 efficacy:1 series:1 tabulate:1 pub:1 outperforms:3 current:2 com:1 ka:1 si:2 belmont:1 chicago:2 numerical:2 wanted:1 designed:1 update:3 stationary:1 half:3 selected:2 fewer:1 intelligence:1 desktop:1 ith:1 lr:2 provides:4 math:2 preference:2 org:6 zhang:1 mathematical:4 along:1 qij:1 kuj:2 indeed:2 cand:1 sdp:2 multi:2 brain:1 salakhutdinov:2 klk2:6 supplementing:1 becomes:1 project:1 provided:2 bounded:1 moreover:2 notation:2 medium:1 panel:1 increasing:1 what:1 minimizes:1 substantially:1 shraibman:1 guarantee:3 every:1 k2:14 jianbo:1 uk:1 control:1 szlam:1 appear:5 bertsekas:1 positive:6 before:1 engineering:1 local:1 qualification:5 doku:1 jtropp:1 consequence:1 ak:14 id:1 approximately:1 might:1 plus:2 equivalence:1 suggests:1 challenging:1 appl:1 ease:1 factorization:8 obeys:1 averaged:1 fazel:1 practical:4 unique:1 testing:1 definite:2 implement:2 sq:1 procedure:2 digit:2 foundational:1 area:1 empirical:1 significantly:1 composite:1 projection:5 donald:1 onto:1 interior:1 uiowa:1 descending:1 www:5 equivalent:2 lagrangian:1 shi:1 pursues:1 attention:1 convex:7 survey:1 shen:1 decomposable:1 immediately:2 factored:3 rule:4 utilizing:1 nuclear:3 variation:3 updated:1 construction:1 user:6 programming:5 us:1 element:1 amortized:1 particularly:2 cut:27 database:1 observed:1 bottom:1 preprint:8 solved:2 verifying:1 fa955009:1 decrease:2 technological:1 counter:3 highest:1 balanced:1 benjamin:2 intuition:1 pd:2 complexity:3 cheeger:1 nesterov:3 mine:4 trained:1 solving:5 rewrite:1 predictive:1 darpa:1 chapter:1 various:1 tx:1 regularizer:7 derivation:1 stacked:1 univ:1 fast:2 describe:3 effective:2 london:1 query:1 whose:1 quite:1 stanford:2 solve:8 widely:2 say:1 rennie:1 otherwise:3 squash:13 compressed:1 noisy:1 online:4 sequence:2 advantage:1 cai:1 reconstruction:1 subtracting:1 maryam:1 product:3 achieve:1 frobenius:1 kv:7 convergence:1 cluster:3 optimum:2 enhancement:1 produce:1 leave:1 converges:1 tti:1 ftp:2 completion:3 nearest:1 received:1 c:2 involves:1 implies:1 tommi:1 direction:1 g22:1 stochastic:8 adjacency:1 require:4 subdivided:1 fix:2 generalization:1 clustered:1 anonymous:1 proposition:2 refute:1 yij:4 hold:1 sufficiently:2 wright:1 exp:1 algorithmic:2 smallest:1 ruslan:2 currently:1 superposition:1 title:1 him:1 largest:2 correctness:1 weighted:4 minimization:6 hope:1 mit:1 gaussian:1 aim:1 modified:1 rather:2 reaching:1 jaakkola:1 publication:1 l0:2 focus:1 ponce:1 improvement:1 rank:15 prevalent:1 tech:1 baseline:1 dollar:1 nn:2 a0:1 diminishing:1 wij:4 misclassified:1 monteiro:5 arg:1 fidelity:1 html:4 colt:1 yahoo:1 constrained:5 equal:6 construct:1 sampling:1 kw:2 represents:1 look:2 k2f:4 icml:1 promote:2 future:1 minimized:1 develops:1 few:4 employ:2 modern:2 comprehensive:1 individual:2 maxj:2 replaced:1 consisting:1 fukushima:4 ab:3 huge:3 interest:1 highly:1 multiply:1 joel:1 evaluation:1 kvk:5 semidefinite:6 yielding:1 behind:1 primal:4 a0i:1 bregman:1 edge:1 nowak:1 arthur:1 tree:1 euclidean:1 re:1 circle:3 theoretical:1 instance:2 column:6 xeon:1 linesearch:1 bresson:1 assignment:1 cost:12 vertex:1 entry:1 subset:3 hundred:2 uniform:1 successful:1 rounding:2 corrupted:1 proximal:18 considerably:1 synthetic:1 combined:1 recht:2 thanks:1 fundamental:1 randomized:1 siam:2 international:1 csail:1 lee:1 kvj:2 w1:3 squared:1 again:2 satisfied:1 containing:2 choose:2 
shiqian:1 cognitive:1 dimitri:1 li:2 parrilo:1 minimizew:2 sec:1 student:1 includes:1 hisashi:1 jitendra:1 explicitly:1 kzk2:1 vi:2 performed:3 root:1 jason:2 closed:4 kwk:1 francis:1 netflix:8 sort:3 complicated:1 rmse:4 collaborative:10 minimize:6 square:1 php:1 publicly:1 moon:4 variance:1 who:1 efficiently:1 identification:1 produced:1 researcher:1 explain:1 reach:3 email:5 definition:3 against:2 proof:2 db_html:3 proved:1 dataset:9 massachusetts:1 popular:1 dimensionality:1 satisfiability:1 segmentation:1 back:1 appears:1 improved:1 formulation:7 done:1 just:1 until:1 hand:3 tropp:1 nonlinear:2 minibatch:1 slackness:1 aj:1 qual:1 biz:1 vj0:2 believe:1 scientific:1 concept:1 verify:2 normalized:2 counterpart:1 multiplier:1 regularization:10 hence:1 xavier:1 read:1 symmetric:2 laboratory:1 goldfarb:1 i2:1 encourages:1 a2jk:1 anything:1 kak:3 samuel:3 generalized:1 pdf:1 outline:1 complete:1 demonstrate:4 performs:1 image:1 recently:1 superior:4 empirically:2 million:1 jl:1 banach:1 measurement:1 cambridge:2 ai:1 rd:1 trivially:1 l0i:3 similarly:1 tracenorm:1 maxcut:1 similarity:1 compiled:1 multivariate:1 own:1 recent:2 showed:2 perspective:1 imbalanced:1 inf:3 closest:1 masao:1 scenario:1 certain:2 nonconvex:3 n00014:1 outperforming:1 onr:1 rep:1 yi:1 devise:1 caltech:1 minimum:2 houston:1 zuowei:1 r0:1 converge:2 signal:1 stephen:1 full:2 rj:6 reduces:1 smooth:3 ntt:1 faster:2 burer:8 offer:2 long:2 bach:1 heralded:1 divided:1 award:3 promotes:2 coded:1 jean:1 prediction:1 scalable:1 involving:4 basic:1 metric:1 arxiv:3 iteration:13 sometimes:1 achieved:1 whereas:1 decreased:1 singular:4 jian:1 w2:3 pass:1 kwi:7 subject:7 elegant:1 mature:1 seem:1 obj:2 call:1 integer:1 exceed:1 easy:1 variety:2 iterate:2 xj:1 tr07:1 brecht:2 approaching:1 opposite:1 idea:3 utility:1 penalty:3 reformulated:1 xjj:1 hessian:1 matlab:2 tol:2 detailed:1 hosting:1 amount:1 hardware:1 reduced:1 http:8 continuation:1 outperform:1 xij:1 computat:1 shifted:2 kxktr:1 per:1 rephrased:1 ario:1 key:1 terminology:1 enormous:1 pb:4 drawn:1 wisc:2 v1:5 n66001:1 graph:12 relaxation:1 sum:2 convert:1 run:1 inverse:1 clipped:1 decision:4 appendix:1 scaling:1 comparable:2 bound:2 guaranteed:2 quadratic:1 nonnegative:1 annual:1 constraint:4 bibliography:1 software:3 ucla:1 nathan:4 speed:1 min:6 optimality:1 separable:1 department:1 tv:2 smaller:2 slightly:2 wi:1 appealing:1 rsalakhu:1 computationally:2 equation:1 describing:1 discus:1 serf:1 end:2 yurii:1 available:11 operation:3 promoting:1 apply:5 observe:1 v2:5 appropriate:2 spectral:10 alternative:3 batch:1 denotes:2 clustering:23 ensure:2 include:1 top:1 running:5 vigor:1 completed:1 madison:1 emmanuel:1 k1:2 uj:2 establish:1 classical:1 build:2 unchanged:1 society:1 tensor:1 objective:5 feng:1 already:1 malik:1 kak2:5 september:1 gradient:26 kth:1 distance:1 athena:1 evaluate:1 reason:1 code:2 index:1 ratio:1 balance:1 minimizing:1 unfortunately:2 executed:1 robert:1 trace:20 jat:1 implementation:6 perform:1 observation:1 commensurate:1 datasets:1 jameson:1 descent:3 incorrectly:1 flop:3 communication:1 nonuniform:2 download:1 ttic:1 rating:7 pablo:1 pair:4 namely:1 optimized:2 california:1 learned:1 established:1 address:1 suggested:1 below:2 pattern:2 challenge:1 program:5 max:55 including:1 power:1 critical:3 misclassification:2 natural:1 force:2 regularized:2 solvable:1 mn:1 movie:4 technology:2 rated:1 x2i:1 julien:1 grothendieck:2 text:1 epoch:4 literature:2 acknowledgement:1 nati:1 python:1 review:1 wisconsin:1 embedded:2 loss:2 expect:1 permutation:1 
afosr:1 interesting:2 filtering:10 srebro:4 versus:1 localized:1 foundation:1 minimizel:1 sufficient:1 article:1 dd:2 thresholding:1 pi:5 balancing:2 row:11 course:1 summary:1 repeat:2 supported:2 jth:2 figueiredo:1 appreciated:1 institute:4 wide:1 neighbor:2 sparse:4 ghz:1 dimension:1 world:1 computes:1 kz:1 collection:1 projected:17 far:1 transaction:2 sj:1 dealing:2 global:2 kkt:2 reveals:1 maxnorm:2 summing:1 xi:2 search:2 continuous:1 iterative:1 triplet:1 sk:2 decomposes:1 table:4 ku:1 lr0:8 adi:1 williamson:2 constructing:1 diag:2 pk:3 noise:1 arise:1 edition:1 repeated:1 kxkmax:10 intel:1 momentum:1 explicit:1 candidate:1 krk2:6 toyota:1 minz:1 formula:1 minute:3 choi:1 misconception:1 hale:1 sensing:1 r2:1 admits:2 evidence:5 intractable:1 consist:1 mnist:3 mirror:1 magnitude:1 margin:1 nk:2 sorting:1 chen:1 logarithmic:1 backtracking:1 yin:1 forming:1 lagrange:1 expressed:1 nserc:1 scalar:2 acm:2 minibatches:1 shell:1 ma:2 rice:1 goal:1 kli:2 sorted:1 replace:3 lipschitz:2 except:1 uniformly:1 called:2 total:3 goemans:2 experimental:1 e:1 select:1 searched:1 armijo:2 incorporate:1 dept:1 tested:2 |
3,451 | 4,125 | A Dirty Model for Multi-task Learning
Ali Jalali
University of Texas at Austin
[email protected]
Pradeep Ravikumar
University of Texas at Austin
[email protected]
Sujay Sanghavi
University of Texas at Austin
[email protected]
Chao Ruan
University of Texas at Austin
[email protected]
Abstract
We consider multi-task learning in the setting of multiple linear regression, and
where some relevant features could be shared across the tasks. Recent research
has studied the use of ℓ1/ℓq norm block-regularizations with q > 1 for such
block-sparse structured problems, establishing strong guarantees on recovery even under
high-dimensional scaling where the number of features scales with the number of
observations. However, these papers also caution that the performance of such
block-regularized methods is very dependent on the extent to which the features
are shared across tasks. Indeed they show [8] that if the extent of overlap is less
than a threshold, or even if parameter values in the shared features are highly
uneven, then block ℓ1/ℓq regularization could actually perform worse than simple
separate elementwise ℓ1 regularization. Since these caveats depend on the
unknown true parameters, we might not know when and which method to apply.
Even otherwise, we are far away from a realistic multi-task setting: not only do the
set of relevant features have to be exactly the same across tasks, but their values
have to be as well.
Here, we ask the question: can we leverage parameter overlap when it exists,
but not pay a penalty when it does not? Indeed, this falls under a more general
question of whether we can model such dirty data which may not fall into a single
neat structural bracket (all block-sparse, or all low-rank, and so on). With the
explosion of such dirty high-dimensional data in modern settings, it is vital to
develop tools (dirty models) to perform biased statistical estimation tailored
to such data. Here, we take a first step, focusing on developing a dirty model
for the multiple regression problem. Our method uses a very simple idea: we
estimate a superposition of two sets of parameters and regularize them differently.
We show both theoretically and empirically that our method strictly and noticeably
outperforms both ℓ1 and ℓ1/ℓq methods, under high-dimensional scaling and over
the entire range of possible overlaps (except at boundary cases, where we match
the best method).
1
Introduction: Motivation and Setup
High-dimensional scaling. In fields across science and engineering, we are increasingly faced with
problems where the number of variables or features p is larger than the number of observations n.
Under such high-dimensional scaling, for any hope of statistically consistent estimation, it becomes
vital to leverage any potential structure in the problem such as sparsity (e.g. in compressed sensing [3] and LASSO [14]), low-rank structure [13, 9], or sparse graphical model structure [12]. It is in
such high-dimensional contexts in particular that multi-task learning [4] could be most useful. Here,
multiple tasks share some common structure such as sparsity, and estimating these tasks jointly by
leveraging this common structure could be more statistically efficient.
Block-sparse Multiple Regression. A common multiple task learning setting, and which is the focus
of this paper, is that of multiple regression, where we have r > 1 response variables, and a common
set of p features or covariates. The r tasks could share certain aspects of their underlying distributions, such as common variance, but the setting we focus on in this paper is where the response
variables have simultaneously sparse structure: the index set of relevant features for each task is
sparse; and there is a large overlap of these relevant features across the different regression problems. Such "simultaneous sparsity" arises in a variety of contexts [15]; indeed, most applications
of sparse signal recovery in contexts ranging from graphical model learning, kernel learning, and
function estimation have natural extensions to the simultaneous-sparse setting [12, 2, 11].
It is useful to represent the multiple regression parameters via a matrix, where each column corresponds to a task, and each row to a feature. Having simultaneous sparse structure then corresponds
to the matrix being largely "block-sparse", where each row is either all zero or mostly non-zero,
and the number of non-zero rows is small. A lot of recent research in this setting has focused on
ℓ1/ℓq norm regularizations, for q > 1, that encourage the parameter matrix to have such block-sparse structure. Particular examples include results using the ℓ1/ℓ∞ norm [16, 5, 8], and the ℓ1/ℓ2 norm [7, 10].
Dirty Models. Block-regularization is "heavy-handed" in two ways. By strictly encouraging shared sparsity, it assumes that all relevant features are shared, and hence suffers under settings, arguably more realistic, where each task depends on features specific to itself in addition to the ones that are common. The second concern with such block-sparse regularizers is that the ℓ1/ℓq norms can be shown to encourage the entries in the non-sparse rows to take nearly identical values. Thus we are far away from the original goal of multitask learning: not only do the set of relevant features have to be exactly the same, but their values have to be as well. Indeed, recent research into such regularized methods [8, 10] cautions against the use of block-regularization in regimes where the supports and values of the parameters for each task can vary widely. Since the true parameter values are unknown, that would be a worrisome caveat.
We thus ask the question: can we learn multiple regression models by leveraging whatever overlap
of features there exist, and without requiring the parameter values to be near identical? Indeed this
is an instance of a more general question on whether we can estimate statistical models where the
data may not fall cleanly into any one structural bracket (sparse, block-sparse and so on). With
the explosion of dirty high-dimensional data in modern settings, it is vital to investigate estimation
of corresponding dirty models, which might require new approaches to biased high-dimensional
estimation. In this paper we take a first step, focusing on such dirty models for a specific problem:
simultaneously sparse multiple regression.
Our approach uses a simple idea: while any one structure might not capture the data, a superposition
of structural classes might. Our method thus searches for a parameter matrix that can be decomposed
into a row-sparse matrix (corresponding to the overlapping or shared features) and an elementwise
sparse matrix (corresponding to the non-shared features). As we show both theoretically and empirically, with this simple fix we are able to leverage any extent of shared features, while allowing
disparities in support and values of the parameters, so that we are always better than both the Lasso
or block-sparse regularizers (at times remarkably so).
The rest of the paper is organized as follows: in Section 2, basic definitions and the setup of the problem are presented. The main results of the paper are discussed in Section 3. Experimental results and simulations are demonstrated in Section 4.
Notation: For any matrix M, we denote its j-th row as M_j, and its k-th column as M^{(k)}. The set of all non-zero rows (i.e. all rows with at least one non-zero element) is denoted by RowSupp(M) and its support by Supp(M). Also, for any matrix M, let ‖M‖_{1,1} := \sum_{j,k} |M_j^{(k)}|, i.e. the sum of the absolute values of the elements, and ‖M‖_{1,∞} := \sum_j ‖M_j‖_∞, where ‖M_j‖_∞ := \max_k |M_j^{(k)}|.
2  Problem Set-up and Our Method
Multiple regression. We consider the following standard multiple linear regression model:

    y^{(k)} = X^{(k)} \bar\theta^{(k)} + w^{(k)},    k = 1, ..., r,

where y^{(k)} ∈ R^n is the response for the k-th task, regressed on the design matrix X^{(k)} ∈ R^{n×p} (possibly different across tasks), while w^{(k)} ∈ R^n is the noise vector. We assume each w^{(k)} is drawn independently from N(0, σ²). The total number of tasks or target variables is r, the number of features is p, while the number of samples we have for each task is n. For notational convenience, we collate these quantities into matrices Y ∈ R^{n×r} for the responses, Θ̄ ∈ R^{p×r} for the regression parameters, and W ∈ R^{n×r} for the noise.

Dirty Model. In this paper we are interested in estimating the true parameter Θ̄ from data by leveraging any (unknown) extent of simultaneous-sparsity. In particular, certain rows of Θ̄ would have many non-zero entries, corresponding to features shared by several tasks ("shared" rows), while certain rows would be elementwise sparse, corresponding to those features which are relevant for some tasks but not all ("non-shared" rows), while certain rows would have all zero entries, corresponding to those features that are not relevant to any task. We are interested in estimators Θ̂ that automatically adapt to different levels of sharedness, and yet enjoy the following guarantees:

Support recovery: We say an estimator Θ̂ successfully recovers the true signed support if sign(Supp(Θ̂)) = sign(Supp(Θ̄)). We are interested in deriving sufficient conditions under which the estimator succeeds. We note that this is stronger than merely recovering the row-support of Θ̄, which is the union of its supports for the different tasks. In particular, denote U_k for the support of the k-th column of Θ̄, and U = ∪_k U_k.

Error bounds: We are also interested in providing bounds on the elementwise ℓ∞ norm error of the estimator Θ̂,

    \|\hat\Theta - \bar\Theta\|_{\infty} = \max_{j=1,...,p} \max_{k=1,...,r} |\hat\theta_j^{(k)} - \bar\theta_j^{(k)}|.
2.1
Our Method
Our method explicitly models the dirty block-sparse structure. We estimate a sum of two parameter matrices B and S with different regularizations for each: encouraging block-structured row-sparsity in B and elementwise sparsity in S. The corresponding "clean" models would either just use block-sparse regularizations [8, 10] or just elementwise sparsity regularizations [14, 18], so that either method would perform better in certain suited regimes. Interestingly, as we will see in the main results, by explicitly allowing both a block-sparse and an elementwise sparse component, we are able to outperform both classes of these "clean" models, for all regimes of the true parameter Θ̄.
Algorithm 1 Dirty Block Sparse
Solve the following convex optimization problem:

    (\hat S, \hat B) \in \arg\min_{S,B} \; \frac{1}{2n} \sum_{k=1}^{r} \left\| y^{(k)} - X^{(k)}\left(S^{(k)} + B^{(k)}\right) \right\|_2^2 + \lambda_s \|S\|_{1,1} + \lambda_b \|B\|_{1,\infty}.    (1)

Then output Θ̂ = B̂ + Ŝ.
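The paper does not prescribe a solver for (1); since the objective is a smooth quadratic plus two separable penalties, one reasonable choice is proximal gradient descent. The ℓ1,1 prox is entrywise soft-thresholding, and because ‖B‖_{1,∞} sums the ℓ∞ norms of the rows, its prox decomposes row-wise into the prox of the ℓ∞ norm, computable by Moreau decomposition as the identity minus the projection onto an ℓ1 ball. The sketch below is one such implementation under these assumptions, not the authors' code; the step size and iteration count are placeholders.

    import numpy as np

    def soft_threshold(A, t):
        return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

    def project_l1_ball(v, z):
        """Euclidean projection of v onto {x : ||x||_1 <= z}."""
        if np.abs(v).sum() <= z:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - z)[0][-1]
        theta = (css[rho] - z) / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def prox_linf(v, t):
        """prox of t*||.||_inf via Moreau: v minus projection onto the
        l1 ball of radius t."""
        return v - project_l1_ball(v, t)

    def dirty_model(X, Y, lam_s, lam_b, iters=500):
        """X: list of r design matrices (n x p); Y: n x r responses.
        Proximal gradient on (1): joint gradient step in (S, B),
        then separate proxes for the two penalties."""
        n, p, r = X[0].shape[0], X[0].shape[1], Y.shape[1]
        S, B = np.zeros((p, r)), np.zeros((p, r))
        # safe step size: joint Hessian has norm 2*max_k ||X_k||^2 / n
        step = 0.5 * n / max(np.linalg.norm(Xk, 2) ** 2 for Xk in X)
        for _ in range(iters):
            Theta = S + B
            G = np.column_stack([Xk.T @ (Xk @ Theta[:, k] - Y[:, k]) / n
                                 for k, Xk in enumerate(X)])
            S = soft_threshold(S - step * G, step * lam_s)
            Bg = B - step * G
            B = np.vstack([prox_linf(Bg[j], step * lam_b) for j in range(p)])
        return S, B

Because the gradient of the quadratic loss with respect to S and to B is the same matrix G, the two prox steps decouple exactly, which is what makes this splitting convenient.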
3  Main Results and Their Consequences
We now provide precise statements of our main results. A number of recent results have shown that the Lasso [14, 18] and ℓ1/ℓ∞ block-regularization [8] methods succeed in recovering signed supports with controlled error bounds under high-dimensional scaling regimes. Our first two theorems extend these results to our dirty model setting. In Theorem 1, we consider the case of deterministic design matrices X^{(k)}, and provide sufficient conditions guaranteeing signed support recovery and elementwise ℓ∞ norm error bounds. In Theorem 2, we specialize this theorem to the case where the rows of the design matrices are random from a general zero-mean Gaussian distribution: this allows us to provide scaling on the number of observations required in order to guarantee signed support recovery and bounded elementwise ℓ∞ norm error.

Our third result is the most interesting in that it explicitly quantifies the performance gains of our method vis-a-vis Lasso and the ℓ1/ℓ∞ block-regularization method. Since this entailed finding the precise constants underlying the earlier theorems, and a correspondingly more delicate analysis, we follow Negahban and Wainwright [8] and focus on the case where there are two tasks (i.e. r = 2), and where we have standard Gaussian design matrices as in Theorem 2. Further, while each of the two tasks depends on s features, only a fraction α of these are common. It is then interesting to see how the behaviors of the different regularization methods vary with the extent of overlap α.
Comparisons. Negahban and Wainwright [8] show that there is actually a "phase transition" in the scaling of the probability of successful signed support-recovery with the number of observations. Denote a particular rescaling of the sample size θ_Lasso(n, p, α) = n / (s log(p − s)). Then, as Wainwright [18] shows, when the rescaled number of samples scales as θ_Lasso > 2 + δ for any δ > 0, Lasso succeeds in recovering the signed support of all columns with probability converging to one. But when the sample size scales as θ_Lasso < 2 − δ for any δ > 0, Lasso fails with probability converging to one. For the ℓ1/ℓ∞-regularized multiple linear regression, define a similar rescaled sample size θ_{1,∞}(n, p, α) = n / (s log(p − (2 − α)s)). Then, as Negahban and Wainwright [8] show, there is again a transition in probability of success from near zero to near one, at the rescaled sample size of θ_{1,∞} = (4 − 3α). Thus, for α < 2/3 ("less sharing") Lasso would perform better, since its transition is at a smaller sample size, while for α > 2/3 ("more sharing") the ℓ1/ℓ∞ regularized method would perform better.

As we show in our third theorem, the phase transition for our method occurs at the rescaled sample size of θ_{1,∞} = (2 − α), which is strictly before either the Lasso or the ℓ1/ℓ∞ regularized method except for the boundary cases: α = 0, i.e. the case of no sharing, where we match Lasso, and α = 1, i.e. full sharing, where we match ℓ1/ℓ∞. Everywhere else, we strictly outperform both methods. Figure 1 shows the empirical performance of each of the three methods; as can be seen, they agree very well with the theoretical analysis. (Further details are in the experiments, Section 4.)
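To make the comparison concrete, the three rescaled thresholds can be tabulated directly; the snippet below is a simple check of the formulas above (2 for Lasso, 4 − 3α for ℓ1/ℓ∞, 2 − α for the dirty model), not anything from the paper.

    import numpy as np

    alphas = np.linspace(0.0, 1.0, 11)
    lasso = np.full_like(alphas, 2.0)   # Lasso transition
    linf = 4.0 - 3.0 * alphas           # l1/linf transition
    dirty = 2.0 - alphas                # dirty-model transition
    for a, t1, t2, t3 in zip(alphas, lasso, linf, dirty):
        print(f"alpha={a:.1f}  lasso={t1:.2f}  l1/linf={t2:.2f}  dirty={t3:.2f}")
    # dirty <= min(lasso, l1/linf) everywhere, with equality only
    # at alpha = 0 (matching Lasso) and alpha = 1 (matching l1/linf)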
3.1  Sufficient Conditions for Deterministic Designs

We first consider the case where the design matrices X^{(k)} for k = 1, ..., r are deterministic, and start by specifying the assumptions we impose on the model. We note that similar sufficient conditions for the deterministic X^{(k)}'s case were imposed in papers analyzing Lasso [18] and block-regularization methods [8, 10].

A0 Column Normalization:  ‖X_j^{(k)}‖_2 ≤ \sqrt{2n} for all j = 1, ..., p, k = 1, ..., r.

Let U_k denote the support of the k-th column of Θ̄, and let U = ∪_k U_k denote the union of the supports for each task. Then we require that

A1 Incoherence Condition:  \gamma_b := 1 - \max_{j \in U^c} \sum_{k=1}^{r} \left\| \langle X_j^{(k)}, X_{U_k}^{(k)} \rangle \langle X_{U_k}^{(k)}, X_{U_k}^{(k)} \rangle^{-1} \right\|_1 > 0.

We will also find it useful to define \gamma_s := 1 - \max_{1 \le k \le r} \max_{j \in U_k^c} \| \langle X_j^{(k)}, X_{U_k}^{(k)} \rangle \langle X_{U_k}^{(k)}, X_{U_k}^{(k)} \rangle^{-1} \|_1. Note that by the incoherence condition A1, we have γ_s > 0.

A2 Eigenvalue Condition:  C_{min} := \min_{1 \le k \le r} \lambda_{min}\left( \frac{1}{n} \langle X_{U_k}^{(k)}, X_{U_k}^{(k)} \rangle \right) > 0.

A3 Boundedness Condition:  D_{max} := \max_{1 \le k \le r} \left\| \left( \frac{1}{n} \langle X_{U_k}^{(k)}, X_{U_k}^{(k)} \rangle \right)^{-1} \right\|_{\infty,1} < \infty.
Further, we require the regularization penalties be set as

    \lambda_s > \frac{\sqrt{2(2-\gamma_s)\,\sigma^2\,\log(pr)}}{\gamma_s\,\sqrt{n}}    and    \lambda_b > \frac{\sqrt{2(2-\gamma_b)\,\sigma^2\,\log(pr)}}{\gamma_b\,\sqrt{n}}.    (2)
[Figure 1: three panels, (a) α = 0.3, (b) α = 2/3, (c) α = 0.8, each plotting the probability of success in signed support recovery against the control parameter θ for the dirty model, LASSO, and the ℓ1/ℓ∞ regularizer, for p ∈ {128, 256, 512}.]

Figure 1: Probability of success in recovering the true signed support using the dirty model, Lasso, and the ℓ1/ℓ∞ regularizer. For a 2-task problem, the probability of success for different values of the feature-overlap fraction α is plotted. As we can see, in the regimes where Lasso is better than, as good as, and worse than the ℓ1/ℓ∞ regularizer ((a), (b), and (c) respectively), the dirty model outperforms both of the methods, i.e., it requires fewer observations for successful recovery of the true signed support compared to Lasso and the ℓ1/ℓ∞ regularizer. Here s = ⌊p/10⌋ always.
Theorem 1. Suppose A0–A3 hold, and that we obtain the estimate Θ̂ from our algorithm with regularization parameters chosen according to (2). Then, with probability at least 1 − c1 exp(−c2 n) → 1, we are guaranteed that the convex program (1) has a unique optimum and

(a) The estimate Θ̂ has no false inclusions, and has bounded ℓ∞ norm error, so that

    Supp(Θ̂) ⊆ Supp(Θ̄),  and  \|\hat\Theta - \bar\Theta\|_{\infty,\infty} \le \underbrace{\sqrt{\frac{4\sigma^2 \log(pr)}{n\, C_{min}}} + \lambda_s D_{max}}_{b_{min}}.

(b) sign(Supp(Θ̂)) = sign(Supp(Θ̄)) provided that  \min_{(j,k) \in Supp(\bar\Theta)} |\bar\theta_j^{(k)}| > b_{min}.

Here the positive constants c1, c2 depend only on γ_s, γ_b, λ_s, λ_b and σ, but are otherwise independent of n, p, r, the problem dimensions of interest.
Remark: Condition (a) guarantees that the estimate will have no false inclusions; i.e. all included
features will be relevant. If in addition we require that it have no false exclusions and that it recover
the support exactly, we need to impose the assumption in (b) that the non-zero elements are large
enough to be detectable above the noise.
3.2  General Gaussian Designs

Often the design matrices consist of samples from a Gaussian ensemble. Suppose that for each task k = 1, ..., r the design matrix X^{(k)} ∈ R^{n×p} is such that each row X_i^{(k)} ∈ R^p is a zero-mean Gaussian random vector with covariance matrix Σ^{(k)} ∈ R^{p×p}, and is independent of every other row. Let Σ_{V,U}^{(k)} ∈ R^{|V|×|U|} be the submatrix of Σ^{(k)} with rows corresponding to V and columns to U. We require these covariance matrices to satisfy the following conditions:

C1 Incoherence Condition:  \gamma_b := 1 - \max_{j \in U^c} \sum_{k=1}^{r} \left\| \Sigma_{j,U_k}^{(k)} \left( \Sigma_{U_k,U_k}^{(k)} \right)^{-1} \right\|_1 > 0.

C2 Eigenvalue Condition:  C_{min} := \min_{1 \le k \le r} \lambda_{min}\left( \Sigma_{U_k,U_k}^{(k)} \right) > 0, so that the minimum eigenvalue is bounded away from zero.

C3 Boundedness Condition:  D_{max} := \left\| \left( \Sigma_{U_k,U_k}^{(k)} \right)^{-1} \right\|_{\infty,1} < \infty.

These conditions are analogues of the conditions for deterministic designs; they are now imposed on the covariance matrix of the (randomly generated) rows of the design matrix.
Further, defining s := \max_k |U_k|, we require the regularization penalties be set as

    \lambda_s > \frac{\left(4\sigma^2\, C_{min} \log(pr)\right)^{1/2}}{\gamma_s\left(\sqrt{n\,C_{min}} - \sqrt{2 s \log(pr)}\right)}    and    \lambda_b > \frac{\left(4\sigma^2\, C_{min}\, r(r\log(2) + \log(p))\right)^{1/2}}{\gamma_b\left(\sqrt{n\,C_{min}} - \sqrt{2 s r (r\log(2) + \log(p))}\right)}.    (3)
Theorem 2. Suppose assumptions C1–C3 hold, and that the number of samples scales as

    n > \max\left\{ \frac{2 s\, C_{min} \log(pr)}{\gamma_s^2},\; \frac{2 s r\, (r \log(2) + \log(p))}{C_{min}\, \gamma_b^2} \right\}.

Suppose we obtain the estimate Θ̂ from algorithm (3). Then, with probability at least 1 − c1 exp(−c2 (r log(2) + log(p))) − c3 exp(−c4 log(rs)) → 1 for some positive numbers c1–c4, we are guaranteed that the algorithm estimate Θ̂ is unique and satisfies the following conditions:

(a) the estimate Θ̂ has no false inclusions, and has bounded ℓ∞ norm error, so that

    Supp(Θ̂) ⊆ Supp(Θ̄),  and  \|\hat\Theta - \bar\Theta\|_{\infty,\infty} \le \underbrace{\sqrt{\frac{50\sigma^2 \log(rs)}{n\, C_{min}}} + \lambda_s \left( \sqrt{\frac{4s}{C_{min}\, n}} + D_{max} \right)}_{g_{min}}.

(b) sign(Supp(Θ̂)) = sign(Supp(Θ̄)) provided that  \min_{(j,k) \in Supp(\bar\Theta)} |\bar\theta_j^{(k)}| > g_{min}.
3.3  Sharp Transition for 2-Task Gaussian Designs

This is one of the most important results of this paper. Here, we perform a more delicate and finer analysis to establish precise quantitative gains of our method. We focus on the special case where r = 2 and the design matrix has rows generated from the standard Gaussian distribution N(0, I_{n×n}), so that C1–C3 hold with C_min = D_max = 1. As we will see both analytically and experimentally, our method strictly outperforms both Lasso and ℓ1/ℓ∞ block-regularization in all cases, except at the extreme endpoints of no support sharing (where it matches that of Lasso) and full support sharing (where it matches that of ℓ1/ℓ∞). We now present our analytical results; the empirical comparisons are presented next in Section 4. The results will be in terms of a particular rescaling of the sample size n as

    \theta(n, p, s, \alpha) := \frac{n}{(2-\alpha)\, s \log\left(p - (2-\alpha)s\right)}.

We will also require the assumptions that

F1:  \lambda_s > \frac{\left(4\sigma^2 \left(1 - \sqrt{s/n}\right)\left(\log(r) + \log(p - (2-\alpha)s)\right)\right)^{1/2}}{n^{1/2} - s^{1/2} - \left((2-\alpha)\, s\, (\log(r) + \log(p - (2-\alpha)s))\right)^{1/2}}

F2:  \lambda_b > \frac{\left(4\sigma^2 \left(1 - \sqrt{s/n}\right) r \left(r\log(2) + \log(p - (2-\alpha)s)\right)\right)^{1/2}}{n^{1/2} - s^{1/2} - \left((1-\alpha/2)\, s r\, (r\log(2) + \log(p - (2-\alpha)s))\right)^{1/2}}.
Theorem 3. Consider a 2-task regression problem (n, p, s, α), where the design matrix has rows generated from the standard Gaussian distribution N(0, I_{n×n}). Suppose \max_{j \in B^*} |\bar\theta_j^{(1)} - \bar\theta_j^{(2)}| = o(\lambda_s), where B* is the submatrix of Θ̄ with rows where both entries are non-zero. Then the estimate Θ̂ of the problem (1) satisfies the following:

(Success) Suppose the regularization coefficients satisfy F1–F2. Further, assume that the number of samples scales as θ(n, p, s, α) > 1. Then, with probability at least 1 − c1 exp(−c2 n) for some positive numbers c1 and c2, we are guaranteed that Θ̂ satisfies the support-recovery and ℓ∞ error bound conditions (a)–(b) in Theorem 2.

(Failure) If θ(n, p, s, α) < 1, there is no solution (B̂, Ŝ) for any choices of λ_s and λ_b such that sign(Supp(Θ̂)) = sign(Supp(Θ̄)).
We note that we require the gap |\bar\theta_j^{(1)} - \bar\theta_j^{(2)}| to be small only on rows where both entries are
non-zero. As we show in a more general theorem in the appendix, even in the case where the gap is
large, the dependence of the sample scaling on the gap is quite weak.
4
Empirical Results
In this section, we investigate the performance of our dirty block-sparse estimator on synthetic and real-world data. The synthetic experiments explore the accuracy of Theorem 3, and compare our estimator with LASSO and the ℓ1/ℓ∞ regularizer. We see that Theorem 3 is very accurate indeed. Next, we apply our method to a real-world dataset containing hand-written digits for classification. Again we compare against LASSO and the ℓ1/ℓ∞ regularizer, and show that the dirty model outperforms both in practice. For each method, the parameters are chosen via cross-validation; see the supplemental material for more details.
4.1
Synthetic Data Simulation
We consider an r = 2-task regression problem as discussed in Theorem 3, for a range of parameters (n, p, s, α). The design matrices X have each entry being i.i.d. Gaussian with mean 0 and variance 1. For each fixed set of (n, s, p, α), we generate 100 instances of the problem. In each instance, given p, s, α, the locations of the non-zero entries of the true Θ̄ are chosen at random; each non-zero entry is then chosen to be i.i.d. Gaussian with mean 0 and variance 1. n samples are then generated from this. We then attempt to estimate using three methods: our dirty model, the ℓ1/ℓ∞ regularizer, and LASSO. In each case, and for each instance, the penalty regularization coefficients are found by cross-validation. After solving the three problems, we compare the signed support of the solution with the true signed support and decide whether or not the program was successful in signed support recovery. We describe this process in more detail in this section.
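A sketch of one instance of this generation process is below; the per-task sparsity s, overlap α, and noise level σ = 1 follow the settings described above, while the helper names and the exact tie-breaking of shared versus task-specific features are illustrative assumptions, not the authors' code.

    import numpy as np

    def make_instance(n, p, s, alpha, sigma=1.0, seed=0):
        """One 2-task instance: supports of size s per task sharing
        floor(alpha*s) features; nonzeros and designs are i.i.d. N(0,1)."""
        rng = np.random.default_rng(seed)
        shared = rng.choice(p, size=int(alpha * s), replace=False)
        rest = np.setdiff1d(np.arange(p), shared)
        own = rng.choice(rest, size=2 * (s - len(shared)), replace=False)
        supp1 = np.concatenate([shared, own[: s - len(shared)]])
        supp2 = np.concatenate([shared, own[s - len(shared):]])
        Theta = np.zeros((p, 2))
        Theta[supp1, 0] = rng.standard_normal(len(supp1))
        Theta[supp2, 1] = rng.standard_normal(len(supp2))
        X = [rng.standard_normal((n, p)) for _ in range(2)]
        Y = np.column_stack([X[k] @ Theta[:, k]
                             + sigma * rng.standard_normal(n)
                             for k in range(2)])
        return X, Y, Theta

    def signed_support_match(Theta_hat, Theta, tol=1e-6):
        """The success criterion used below: exact signed-support recovery."""
        thresholded = np.where(np.abs(Theta_hat) > tol, Theta_hat, 0.0)
        return np.array_equal(np.sign(thresholded), np.sign(Theta))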
Performance Analysis: We ran the algorithm for three different values of the overlap ratio α ∈ {0.3, 2/3, 0.8} with three different numbers of features p ∈ {128, 256, 512}. For any instance of the problem (n, p, s, α), if the recovered matrix Θ̂ has the same sign support as the true Θ̄, then we count it as a success, and otherwise as a failure (even if one element has a different sign, we count it as a failure). As Theorem 3 predicts and Fig. 1 shows, the right scaling for the number of observations is n / (s log(p − (2 − α)s)), where all curves stack on top of each other at 2 − α. Also, the number of observations required by the dirty model for true signed support recovery is always less than for both LASSO and the ℓ1/ℓ∞ regularizer. Fig. 1(a) shows the probability of success for the case α = 0.3 (when LASSO is better than the ℓ1/ℓ∞ regularizer) and that the dirty model outperforms both methods. When α = 2/3 (see Fig. 1(b)), LASSO and the ℓ1/ℓ∞ regularizer perform the same, but the dirty model requires almost 33% fewer observations for the same performance. As α grows toward 1, e.g. α = 0.8 as shown in Fig. 1(c), ℓ1/ℓ∞ performs better than LASSO. Still, the dirty model performs better than both methods in this case as well.
[Figure 2: phase transition threshold versus the shared support parameter α for the dirty model, LASSO, and the ℓ1/ℓ∞ regularizer, for p ∈ {128, 256, 512}.]

Figure 2: Verification of the result of Theorem 3 on the behavior of the phase transition threshold by changing the parameter α in a 2-task (n, p, s, α) problem for the dirty model, LASSO, and the ℓ1/ℓ∞ regularizer. The y-axis is n / (s log(p − (2 − α)s)), where n is the number of samples at which the threshold was observed. Here s = ⌊p/10⌋. Our dirty model method shows a gain in sample complexity over the entire range of sharing α. The pre-constant in Theorem 3 is also validated.
                                          Our Model              ℓ1/ℓ∞     LASSO
    n = 10  Average Classification Error  8.6%                   9.9%      10.8%
            Variance of Error             0.53%                  0.64%     0.51%
            Average Row Support Size      B: 165, B+S: 171       170       123
            Average Support Size          S: 18,  B+S: 1651      1700      539
    n = 20  Average Classification Error  3.0%                   3.5%      4.1%
            Variance of Error             0.56%                  0.62%     0.68%
            Average Row Support Size      B: 211, B+S: 226       217       173
            Average Support Size          S: 34,  B+S: 2118      2165      821
    n = 40  Average Classification Error  2.2%                   3.2%      2.8%
            Variance of Error             0.57%                  0.68%     0.85%
            Average Row Support Size      B: 270, B+S: 299       368       354
            Average Support Size          S: 67,  B+S: 2761      3669      2053

Table 1: Handwriting classification results for our model, ℓ1/ℓ∞, and LASSO.
Scaling Verification: To verify that the phase transition threshold changes linearly with α as predicted by Theorem 3, we plot the phase transition threshold versus α. For five different values of α ∈ {0.05, 0.3, 2/3, 0.8, 0.95} and three different values of p ∈ {128, 256, 512}, we find the phase transition threshold for the dirty model, LASSO, and the ℓ1/ℓ∞ regularizer. We consider the point where the probability of success in recovery of the signed support exceeds 50% as the phase transition threshold; we find this point by interpolating between the closest two points. Fig. 2 shows that the phase transition threshold for the dirty model is always lower than the phase transition thresholds for LASSO and the ℓ1/ℓ∞ regularizer.
4.2 Handwritten Digits Dataset
We use the handwritten digit dataset [1], containing features of handwritten numerals (0-9) extracted
from a collection of Dutch utility maps. This dataset has been used by a number of papers [17, 6]
as a reliable dataset for handwritten recognition algorithms. There are thus r = 10 tasks, and each
handwritten sample consists of p = 649 features.
Table 1 shows the results of our analysis for different sizes n of the training set. We measure the classification error for each digit to get the 10-vector of errors. Then, we find the average error and the variance of the error vector to show how the error is distributed over all tasks. We compare our method with the ℓ1/ℓ∞ regularizer method and LASSO. Again, in all methods, parameters are chosen via cross-validation.
For our method we separate out the B and S matrices that our method finds, so as to illustrate how
many features it identifies as ?shared? and how many as ?non-shared?. For the other methods we
just report the straight row and support numbers, since they do not make such a separation.
Acknowledgements
We acknowledge support from NSF grant IIS-101842, and NSF CAREER program, Grant 0954059.
References
[1] A. Asuncion and D. J. Newman. UCI Machine Learning Repository, http://www.ics.uci.edu/~mlearn/MLRepository.html. University of California, School of Information and Computer Science, Irvine, CA, 2007.
[2] F. Bach. Consistency of the group lasso and multiple kernel learning. Journal of Machine Learning Research, 9:1179–1225, 2008.
[3] R. Baraniuk. Compressive sensing. IEEE Signal Processing Magazine, 24(4):118–121, 2007.
[4] R. Caruana. Multitask learning. Machine Learning, 28:41–75, 1997.
[5] C. Zhang and J. Huang. Model selection consistency of the lasso selection in high-dimensional linear regression. Annals of Statistics, 36:1567–1594, 2008.
[6] X. He and P. Niyogi. Locality preserving projections. In NIPS, 2003.
[7] K. Lounici, A. B. Tsybakov, M. Pontil, and S. A. van de Geer. Taking advantage of sparsity in multi-task learning. In 22nd Conference On Learning Theory (COLT), 2009.
[8] S. Negahban and M. J. Wainwright. Joint support recovery under high-dimensional scaling: Benefits and perils of ℓ1,∞-regularization. In Advances in Neural Information Processing Systems (NIPS), 2008.
[9] S. Negahban and M. J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. In ICML, 2010.
[10] G. Obozinski, M. J. Wainwright, and M. I. Jordan. Support union recovery in high-dimensional multivariate regression. Annals of Statistics, 2010.
[11] P. Ravikumar, H. Liu, J. Lafferty, and L. Wasserman. Sparse additive models. Journal of the Royal Statistical Society, Series B.
[12] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 2009.
[13] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. In Allerton Conference, Allerton House, Illinois, 2007.
[14] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[15] J. A. Tropp, A. C. Gilbert, and M. J. Strauss. Algorithms for simultaneous sparse approximation. Signal Processing, special issue on "Sparse approximations in signal and image processing", 86:572–602, 2006.
[16] B. Turlach, W. N. Venables, and S. J. Wright. Simultaneous variable selection. Technometrics, 27:349–363, 2005.
[17] M. van Breukelen, R. P. W. Duin, D. M. J. Tax, and J. E. den Hartog. Handwritten digit recognition by combined classifiers. Kybernetika, 34(4):381–386, 1998.
[18] M. J. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using ℓ1-constrained quadratic programming (lasso). IEEE Transactions on Information Theory, 55:2183–2202, 2009.
3,452 | 4,126 | A novel family of non-parametric cumulative based
divergences for point processes
Sohan Seth
University of Florida
Il "Memming" Park
University of Texas at Austin
Mulugeta Semework
SUNY Downstate Medical Center
Austin J. Brockmeier
University of Florida
John Choi, Joseph T. Francis
SUNY Downstate Medical Center & NYU-Poly
José C. Príncipe
University of Florida
Abstract
Hypothesis testing on point processes has several applications such as model fitting, plasticity detection, and non-stationarity detection. Standard tools for hypothesis testing include tests on mean firing rate and time varying rate function.
However, these statistics do not fully describe a point process, and therefore, the
conclusions drawn by these tests can be misleading. In this paper, we introduce
a family of non-parametric divergence measures for hypothesis testing. A divergence measure compares the full probability structure and, therefore, leads to a
more robust test of hypothesis. We extend the traditional Kolmogorov–Smirnov
and Cramér–von-Mises tests to the space of spike trains via stratification, and
show that these statistics can be consistently estimated from data without any free
parameter. We demonstrate an application of the proposed divergences as a cost
function to find optimally matched point processes.
1
Introduction
Neurons communicate mostly through noisy sequences of action potentials, also known as spike
trains. A point process captures the stochastic properties of such sequences of events [1]. Many
neuroscience problems such as model fitting (goodness-of-fit), plasticity detection, change point
detection, non-stationarity detection, and neural code analysis can be formulated as statistical inference on point processes [2, 3]. To avoid the complication of dealing with spike train observations,
neuroscientists often use summarizing statistics such as mean firing rate to compare two point processes. However, this approach implicitly assumes a model for the underlying point process, and
therefore, the choice of the summarizing statistic fundamentally restricts the validity of the inference
procedure.
One alternative to mean firing rate is to use the distance between the inhomogeneous rate functions, i.e. ∫ |λ1(t) − λ2(t)| dt, as a test statistic, which is sensitive to the temporal fluctuation of the
means of the point processes. In general the rate function does not fully specify a point process,
and therefore, ambiguity occurs when two distinct point processes have the same rate function.
Although physiologically meaningful change is often accompanied by the change in rate, there has
been evidence that the higher order statistics can change without a corresponding change of rate [4,
5]. Therefore, statistical tools that capture higher order statistics, such as divergences, can improve
the state-of-the-art hypothesis testing framework for spike train observations, and may encourage
new scientific discoveries.
In this paper, we present a novel family of divergence measures between two point processes. Unlike firing rate function based measures, a divergence measure is zero if and only if the two point
processes are identical. Applying a divergence measure for hypothesis testing is, therefore, more
appropriate in a statistical sense. We show that the proposed measures can be estimated from
data without any assumption on the underlying probability structure. However, a distribution-free
(non-parametric) approach often suffers from having free parameters, e.g. choice of kernel in nonparametric density estimation, and these free parameters often need to be chosen using computationally expensive methods such as cross validation [6]. We show that the proposed measures can
be consistently estimated in a parameter free manner, making them particularly useful in practice.
One of the difficulties of dealing with continuous-time point process is the lack of well structured
space on which the corresponding probability laws can be described. In this paper we follow a rather
unconventional approach for describing the point process by a direct sum of Euclidean spaces of
varying dimensionality, and show that the proposed divergence measures can be expressed in terms
of cumulative distribution functions (CDFs) in these disjoint spaces. To be specific, we represent
the point process by the probability of having a finite number of spikes and the probability of spike
times given that number of spikes, and since these time values are reals, we can represent them in
a Euclidean space using a CDF. We follow this particular approach since, first, CDFs can be easily
estimated consistently using empirical CDFs without any free parameter, and second, standard tests
on CDFs such as the Kolmogorov–Smirnov (K-S) test [7] and the Cramér–von-Mises (C-M) test [8] are
well studied in the literature. Our work extends the conventional K-S test and C-M test on the real
line to the space of spike trains.
The rest of the paper is organized as follows; in section 2 we introduce the measure space where
the point process is defined as probability measures, in section 3 and section 4 we introduce the
extended K-S and C-M divergences, and derive their respective estimators. Here we also prove the
consistency of the proposed estimators. In section 5, we compare various point process statistics in
a hypothesis testing framework. In section 6 we show an application of the proposed measures in
selecting the optimal stimulus parameter. In section 7, we conclude the paper with some relevant
discussion and future work guidelines.
2
Basic point process
We define a point process to be a probability measure over all possible spike trains. Let Ω be the set of all finite spike trains; that is, each ω ∈ Ω can be represented by a finite set of action potential timings ω = {t1 ≤ t2 ≤ ... ≤ tn} ∈ R^n, where n is the number of spikes. Let Ω_0, Ω_1, ... denote the partitions of Ω such that Ω_n contains all possible spike trains with exactly n events (spikes), hence Ω_n = R^n. Note that Ω = ∪_{n=0}^{∞} Ω_n is a disjoint union, and that Ω_0 has only one element representing the empty spike train (no action potential). See Figure 1 for an illustration.

Define a σ-algebra on Ω by the σ-algebra generated by the union of Borel sets defined on the Euclidean spaces: F = σ(∪_{n=0}^{∞} B(Ω_n)). Note that any measurable set A ∈ F can be partitioned into {A_n = A ∩ Ω_n}_{n=0}^{∞}, such that each A_n is measurable in the corresponding measurable space (Ω_n, B(Ω_n)). Here A denotes a collection of spike trains involving varying numbers of action potentials and corresponding action potential timings, whereas A_n denotes the subset of these spike trains involving exactly n action potentials each.

A (finite) point process is defined as a probability measure P on the measurable space (Ω, F) [1]. Let P and Q be two probability measures on (Ω, F); then we are interested in finding the divergence d(P, Q) between P and Q, where a divergence measure is characterized by d(P, Q) ≥ 0 and d(P, Q) = 0 ⟺ P = Q.
3
Extended K-S divergence
A Kolmogorov–Smirnov (K-S) type divergence between P and Q can be derived from the L1 distance between the probability measures, following the equivalent representation

    d_1(P, Q) = \int_{\Omega} d|P - Q| \propto \sup_{A \in \mathcal{F}} |P(A) - Q(A)|.    (1)
[Figure 1: (left) a schematic of the stratified point-process space; (right) spike trains from an inhomogeneous Poisson process, grouped by spike count, plotted against time.]

Figure 1: (Left) Illustration of how the point process space is stratified. (Right) Example of spike trains stratified by their respective spike count.
Since (1) is difficult and perhaps impossible to estimate directly without a model, our strategy is to use the stratified spaces (Ω_0, Ω_1, ...) defined in the previous section, and take the supremum only in the corresponding conditioned probability measures. Let F_i = F ∩ Ω_i := {F ∩ Ω_i | F ∈ F}. Since ∪_i F_i ⊆ F,

    d_1(P, Q) \ge \sum_{n \in \mathbb{N}} \sup_{A \in \mathcal{F}_n} |P(A) - Q(A)| = \sum_{n \in \mathbb{N}} \sup_{A \in \mathcal{F}_n} |P(\Omega_n) P(A|\Omega_n) - Q(\Omega_n) Q(A|\Omega_n)|.

Since each Ω_n is a Euclidean space, we can induce the traditional K-S test statistic by further reducing the search space to F̃_n = {×_i (−∞, t_i] | t = (t_1, ..., t_n) ∈ R^n}. This results in the following inequality,

    \sup_{A \in \mathcal{F}_n} |P(A) - Q(A)| \ge \sup_{A \in \tilde{\mathcal{F}}_n} |P(A) - Q(A)| = \sup_{t \in R^n} |F_P^{(n)}(t) - F_Q^{(n)}(t)|,    (2)

where F_P^{(n)}(t) = P[T_1 ≤ t_1 ∧ ... ∧ T_n ≤ t_n] is the cumulative distribution function (CDF) corresponding to the probability measure P in Ω_n. Hence, we define the K-S divergence as

    d_{KS}(P, Q) = \sum_{n \in \mathbb{N}} \sup_{t \in R^n} \left| P(\Omega_n) F_P^{(n)}(t) - Q(\Omega_n) F_Q^{(n)}(t) \right|.    (3)

Given a finite number of samples X = {x_i}_{i=1}^{N_P} and Y = {y_j}_{j=1}^{N_Q} from P and Q respectively, we have the following estimator for equation (3):

    \hat{d}_{KS}(P, Q) = \sum_{n \in \mathbb{N}} \sup_{t \in R^n} \left| \hat{P}(\Omega_n) \hat{F}_P^{(n)}(t) - \hat{Q}(\Omega_n) \hat{F}_Q^{(n)}(t) \right| = \sum_{n \in \mathbb{N}} \sup_{t \in X_n \cup Y_n} \left| \hat{P}(\Omega_n) \hat{F}_P^{(n)}(t) - \hat{Q}(\Omega_n) \hat{F}_Q^{(n)}(t) \right|,    (4)

where X_n = X ∩ Ω_n, and P̂ and F̂_P are the empirical probability and empirical CDF, respectively. Notice that we only search for the supremum over the locations of the realizations X_n ∪ Y_n and not the whole R^n, since the empirical CDF difference P̂(Ω_n)F̂_P^{(n)}(t) − Q̂(Ω_n)F̂_Q^{(n)}(t) only changes values at those locations.
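Equation (4) only requires counting and empirical CDF evaluations, so it can be computed exactly by stratifying the two samples by spike count and scanning the pooled observations. The sketch below is one straightforward implementation of (4), with spike trains represented as sorted NumPy arrays; it is an illustration, not the authors' code.

    import numpy as np
    from collections import defaultdict

    def ks_divergence(X, Y):
        """Extended K-S estimator (4). X, Y: lists of 1-D sorted arrays
        of spike times (one array per trial)."""
        NX, NY = len(X), len(Y)
        strata = defaultdict(lambda: ([], []))
        for x in X:
            strata[len(x)][0].append(x)
        for y in Y:
            strata[len(y)][1].append(y)
        total = 0.0
        for n, (Xn, Yn) in strata.items():
            pn, qn = len(Xn) / NX, len(Yn) / NY
            if n == 0:                    # empty trains: CDFs are constant 1
                total += abs(pn - qn)
                continue
            Px = np.array(Xn) if Xn else np.empty((0, n))
            Qy = np.array(Yn) if Yn else np.empty((0, n))
            best = 0.0
            for t in list(Px) + list(Qy):  # supremum over observed points
                Fp = np.mean(np.all(Px <= t, axis=1)) if len(Px) else 0.0
                Fq = np.mean(np.all(Qy <= t, axis=1)) if len(Qy) else 0.0
                best = max(best, abs(pn * Fp - qn * Fq))
            total += best
        return total

Here F̂_P^{(n)}(t) is the fraction of trains in X_n whose ordered spike times are all no later than the corresponding coordinates of t, which is exactly the empirical joint CDF used in (4).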
Theorem 1 (d_KS is a divergence).

    d_1(P, Q) \ge d_{KS}(P, Q) \ge 0    (5)
    d_{KS}(P, Q) = 0 \iff P = Q    (6)
Proof. The first property, and the "⇐" direction of the second property, are trivial. From the definition of d_KS and the properties of CDFs, d_KS(P, Q) = 0 implies that P(Ω_n) = Q(Ω_n) and F_P^{(n)} = F_Q^{(n)} for all n ∈ N. Given probability measures for each (Ω_n, F_n), denoted P_n and Q_n, there exist corresponding unique extended measures P and Q for (Ω, F) such that their restrictions to (Ω_n, F_n) coincide with P_n and Q_n; hence P = Q.
Theorem 2 (Consistency of the K-S divergence estimator). As the sample size approaches infinity,

    \left| d_{KS} - \hat{d}_{KS} \right| \xrightarrow{a.u.} 0.    (7)
Proof. Note that |\sum_n \sup f_n - \sum_n \sup g_n| \le \sum_n |\sup f_n - \sup g_n|. Due to the triangle inequality of the supremum norm,

    \sup_{t \in R^n} \left| P(\Omega_n) F_P^{(n)}(t) - Q(\Omega_n) F_Q^{(n)}(t) \right| - \sup_{t \in R^n} \left| \hat{P}(\Omega_n) \hat{F}_P^{(n)}(t) - \hat{Q}(\Omega_n) \hat{F}_Q^{(n)}(t) \right|
        \le \sup_{t \in R^n} \left| P(\Omega_n) F_P^{(n)}(t) - Q(\Omega_n) F_Q^{(n)}(t) - \hat{P}(\Omega_n) \hat{F}_P^{(n)}(t) + \hat{Q}(\Omega_n) \hat{F}_Q^{(n)}(t) \right|.

Again, using the triangle inequality we can show the following:

    \left| P(\Omega_n) F_P^{(n)}(t) - Q(\Omega_n) F_Q^{(n)}(t) - \hat{P}(\Omega_n) \hat{F}_P^{(n)}(t) + \hat{Q}(\Omega_n) \hat{F}_Q^{(n)}(t) \right|
      = \left| P(\Omega_n) F_P^{(n)}(t) - P(\Omega_n) \hat{F}_P^{(n)}(t) - Q(\Omega_n) F_Q^{(n)}(t) + Q(\Omega_n) \hat{F}_Q^{(n)}(t) \right.
        \left. + P(\Omega_n) \hat{F}_P^{(n)}(t) - \hat{P}(\Omega_n) \hat{F}_P^{(n)}(t) + \hat{Q}(\Omega_n) \hat{F}_Q^{(n)}(t) - Q(\Omega_n) \hat{F}_Q^{(n)}(t) \right|
      \le P(\Omega_n) \left| F_P^{(n)}(t) - \hat{F}_P^{(n)}(t) \right| + Q(\Omega_n) \left| F_Q^{(n)}(t) - \hat{F}_Q^{(n)}(t) \right|
        + \hat{F}_P^{(n)}(t) \left| P(\Omega_n) - \hat{P}(\Omega_n) \right| + \hat{F}_Q^{(n)}(t) \left| Q(\Omega_n) - \hat{Q}(\Omega_n) \right|.

Then the theorem follows from the Glivenko–Cantelli theorem, and P̂, Q̂ → P, Q almost surely.
Notice that the inequality in (2) can be made stricter by considering the supremum over not just the products of the segments (−∞, t_i], but over all 2^n − 1 possible products of the segments (−∞, t_i] and [t_i, ∞) in n dimensions [7]. However, the latter approach is computationally more expensive, and therefore, in this paper we only explore the former approach.
4  Extended C-M divergence

We can extend equation (3) to derive a Cramér–von-Mises (C-M) type divergence for point processes. Let μ = (P + Q)/2; then P, Q are absolutely continuous with respect to μ. Note that F_P^{(n)}, F_Q^{(n)} ∈ L2(Ω_n, μ|_n), where |_n denotes the restriction on Ω_n; i.e. the CDFs are L2 integrable, since they are bounded. Analogous to the relation between the K-S test and the C-M test, we would like to use the integrated squared deviation statistic in place of the maximal deviation statistic. By integrating over the probability measure μ instead of the supremum operation, and using the L2 instead of the L∞ distance, we define

    d_{CM}(P, Q) = \sum_{n \in \mathbb{N}} \int_{R^n} \left( P(\Omega_n) F_P^{(n)}(t) - Q(\Omega_n) F_Q^{(n)}(t) \right)^2 d\mu|_n(t).    (8)
This can be seen as a direct extension of the C-M criterion. The corresponding estimator can be derived using the strong law of large numbers,

    \hat{d}_{CM}(P, Q) = \sum_{n \in \mathbb{N}} \left[ \frac{1}{2} \sum_i \left( \hat{P}(\Omega_n) \hat{F}_P^{(n)}(x_i^{(n)}) - \hat{Q}(\Omega_n) \hat{F}_Q^{(n)}(x_i^{(n)}) \right)^2 + \frac{1}{2} \sum_i \left( \hat{P}(\Omega_n) \hat{F}_P^{(n)}(y_i^{(n)}) - \hat{Q}(\Omega_n) \hat{F}_Q^{(n)}(y_i^{(n)}) \right)^2 \right].    (9)
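The estimator (9) replaces the supremum by an average of squared CDF differences over the pooled sample. A sketch reusing the same stratified representation as the K-S sketch above is shown below; note that the garbled display of (9) leaves the normalization of the empirical sums ambiguous, so here the sums are divided by the sample sizes so that they implement integration against the empirical version of μ = (P + Q)/2; treat that as an assumption.

    import numpy as np
    from collections import defaultdict

    def cm_divergence(X, Y):
        """Extended C-M estimator in the spirit of (9).
        X, Y: lists of 1-D sorted spike-time arrays (one per trial)."""
        NX, NY = len(X), len(Y)
        strata = defaultdict(lambda: ([], []))
        for x in X:
            strata[len(x)][0].append(x)
        for y in Y:
            strata[len(y)][1].append(y)
        total = 0.0
        for n, (Xn, Yn) in strata.items():
            pn, qn = len(Xn) / NX, len(Yn) / NY
            if n == 0:  # the empty train: both CDFs are identically 1
                total += 0.5 * (pn + qn) * (pn - qn) ** 2
                continue
            Px = np.array(Xn) if Xn else np.empty((0, n))
            Qy = np.array(Yn) if Yn else np.empty((0, n))

            def gap(t):  # pn * F_P(t) - qn * F_Q(t) at a point t in R^n
                Fp = np.mean(np.all(Px <= t, axis=1)) if len(Px) else 0.0
                Fq = np.mean(np.all(Qy <= t, axis=1)) if len(Qy) else 0.0
                return pn * Fp - qn * Fq

            total += 0.5 * sum(gap(t) ** 2 for t in Px) / NX
            total += 0.5 * sum(gap(t) ** 2 for t in Qy) / NY
        return total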
Theorem 3 (d_CM is a divergence). For P and Q with square integrable CDFs,

    d_{CM}(P, Q) \ge 0    (10)
    d_{CM}(P, Q) = 0 \iff P = Q.    (11)
Proof. Similar to theorem 1.
Theorem 4 (Consistency of the C-M divergence estimator). As the sample size approaches infinity,

    \left| d_{CM} - \hat{d}_{CM} \right| \xrightarrow{a.u.} 0.    (12)
Proof. Similar to (7), we find an upper bound and show that the bound uniformly converges to zero. To simplify the notation, we define g_n(x) = P(Ω_n)F_P^{(n)}(x) − Q(Ω_n)F_Q^{(n)}(x) and ĝ_n(x) = P̂(Ω_n)F̂_P^{(n)}(x^{(n)}) − Q̂(Ω_n)F̂_Q^{(n)}(x^{(n)}). Note that ĝ_n → g_n almost uniformly by the Glivenko–Cantelli theorem, and P̂ → P almost surely by the strong law of large numbers.

    \left| d_{CM} - \hat{d}_{CM} \right| = \frac{1}{2} \left| \sum_{n \in \mathbb{N}} \int g_n^2\, dP|_n + \sum_{n \in \mathbb{N}} \int g_n^2\, dQ|_n - \sum_{n \in \mathbb{N}} \sum_i \hat{g}_n^2(x_i) - \sum_{n \in \mathbb{N}} \sum_i \hat{g}_n^2(y_i) \right|
        = \frac{1}{2} \left| \sum_{n \in \mathbb{N}} \left( \int g_n^2\, dP|_n - \int \hat{g}_n^2\, d\hat{P}|_n + \int g_n^2\, dQ|_n - \int \hat{g}_n^2\, d\hat{Q}|_n \right) \right|
        \le \frac{1}{2} \sum_{n \in \mathbb{N}} \left( \left| \int g_n^2\, dP|_n - \int \hat{g}_n^2\, d\hat{P}|_n \right| + \left| \int g_n^2\, dQ|_n - \int \hat{g}_n^2\, d\hat{Q}|_n \right| \right),

where P̂ = Σ_i δ(x_i) and Q̂ = Σ_i δ(y_i) are the corresponding empirical measures. Without loss of generality, we only find the bound on |∫ g_n² dP|_n − ∫ ĝ_n² dP̂|_n|; the rest is bounded similarly for Q:

    \int g_n^2\, dP|_n - \int \hat{g}_n^2\, d\hat{P}|_n = \int g_n^2\, dP|_n - \int \hat{g}_n^2\, dP|_n + \int \hat{g}_n^2\, dP|_n - \int \hat{g}_n^2\, d\hat{P}|_n
        \le \left| \int \left( g_n^2 - \hat{g}_n^2 \right) dP|_n \right| + \left| \int \hat{g}_n^2\, d\left( P|_n - \hat{P}|_n \right) \right|.

Applying the Glivenko–Cantelli theorem and the strong law of large numbers, these two terms converge to zero since ĝ_n² is bounded. Hence, we show that the C-M test estimator is consistent.
5 Results

We present a set of two-sample problems and apply various statistics to perform hypothesis testing. As a baseline measure, we consider the widely used Wilcoxon rank-sum test (or equivalently, the Mann–Whitney U test) on the count distribution (e.g. [9]), which is a non-parametric median test for the total number of action potentials, and the integrated squared deviation statistic $\lambda_{L^2} = \int (\lambda_1(t) - \lambda_2(t))^2 \, dt$, where $\lambda(t)$ is estimated by smoothing spike timing with a Gaussian kernel, evaluated on a uniform grid with spacing at least an order of magnitude smaller than the standard deviation of the kernel. We report the performance of the test with varying kernel sizes.

All tests are quantified by the power of the test given a significance threshold (type-I error) of 0.05. The null hypothesis distribution is empirically computed either by generating independent samples or by permuting the data to create at least 1000 values.
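As a concrete illustration of this procedure, the sketch below (ours; the statistic and the two samplers are passed in as callables, and larger statistic values are assumed to indicate a larger difference) estimates test power with a permutation null:

```python
import numpy as np

def test_power(stat, sample_h0, sample_h1, n_perm=1000, n_runs=100, alpha=0.05):
    """Estimate the power of a two-sample statistic via a permutation null.

    stat(a, b) maps two lists of spike trains to a scalar; sample_h0 and
    sample_h1 each return a list of spike trains drawn from one hypothesis.
    """
    rejections = 0
    for _ in range(n_runs):
        a, b = sample_h0(), sample_h1()
        observed = stat(a, b)
        pooled = a + b
        null = []
        for _ in range(n_perm):                 # build the permutation null
            perm = np.random.permutation(len(pooled))
            a_p = [pooled[i] for i in perm[:len(a)]]
            b_p = [pooled[i] for i in perm[len(a):]]
            null.append(stat(a_p, b_p))
        threshold = np.quantile(null, 1 - alpha)
        rejections += observed > threshold
    return rejections / n_runs
```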
5.1 Stationary renewal processes
Renewal processes are widely used point process models that compensate for deviations from the Poisson process [10]. We consider two stationary renewal processes with gamma interval distributions. Since the mean rates of the two processes are the same, the rate function statistic and the Wilcoxon test do not yield consistent results, while the proposed measures obtain high power with a small number of samples. The C-M test is more powerful than the K-S test in this case; this can be interpreted by the fact that the difference in the cumulative distributions is not concentrated but spread out over time because of the stationarity.

Figure 2: Gamma-distributed renewal process with shape parameter θ = 3 (H0) and θ = 0.5 (H1); the mean number of action potentials is fixed to 10. (Left) Spike trains from the null and alternate hypotheses. (Right) Comparison of the power of each method (K-S, C-M, λ_L2 with 1 ms, 10 ms and 100 ms kernels, and the count statistic N) as a function of the number of samples. The error bars are standard deviations over 20 Monte Carlo runs.
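For reproducing this experiment, a simple generator of gamma-interval renewal trains might look as follows; this is our sketch, and for simplicity the process is started at an event rather than in equilibrium, which slightly perturbs exact stationarity:

```python
import numpy as np

def gamma_renewal_train(shape, mean_count=10, duration=1.0, rng=np.random):
    """One gamma-interval renewal spike train on [0, duration).

    The scale is chosen so the mean interval is duration/mean_count
    (shape = 1 recovers a Poisson process).
    """
    scale = duration / (mean_count * shape)   # mean interval = shape * scale
    spikes, t = [], rng.gamma(shape, scale)
    while t < duration:
        spikes.append(t)
        t += rng.gamma(shape, scale)
    return np.array(spikes)
```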
5.2 Precisely timed spike trains
When the same stimulation is presented to a neuronal system, the observed spike trains sometimes show a highly repeatable spatio-temporal pattern at the millisecond time scale. Recently, these precisely timed spike trains (PTSTs) have been abundantly reported in both in vivo and in vitro preparations [11, 12, 13]. Despite being highly reproducible, different forms of trial-to-trial variability have also been observed [14]. It is crucial to understand this variability, since for a system to utilize PTSTs as a temporal code, it should presumably be robust to their variability structure, and possibly learn to reduce it [15].

A precisely timed spike train in an interval is modeled by L probability density and probability pairs $\{(f_i(t), p_i)\}_{i=1}^L$. Each $f_i(t)$ corresponds to the temporal jitter, and $p_i$ corresponds to the probability of generating the spike. Each realization of the PTST model produces at most L spikes. The equi-intensity Poisson process has the rate function $\lambda(t) = \sum_i p_i f_i(t)$. We test whether the methods can differentiate between the PTST (H0) and the equi-intensity Poisson process (H1) for L = 1, 2, 3, 4 (see Figure 3 for the L = 4 case). Note that L determines the maximum dimension for the PTST. The $f_i(t)$ were equal-variance Gaussian densities centered on a grid sampled from a uniform random variable, and $p_i = 0.9$.
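Both hypotheses can be sampled directly from this description; the sketch below is ours, with an assumed Gaussian jitter width of 5 ms:

```python
import numpy as np

def sample_ptst(centers, p=0.9, jitter=0.005, rng=np.random):
    """One PTST realization: spike i fires with probability p at a
    Gaussian-jittered copy of its template time centers[i]."""
    fired = rng.rand(len(centers)) < p
    times = np.asarray(centers)[fired] + jitter * rng.randn(int(fired.sum()))
    return np.sort(times)

def sample_equi_poisson(centers, p=0.9, jitter=0.005, rng=np.random):
    """Equi-intensity Poisson process with rate(t) = sum_i p * f_i(t):
    draw a Poisson count with mean p*L, then i.i.d. times from the
    uniform mixture of the Gaussian components."""
    n = rng.poisson(p * len(centers))
    comp = rng.randint(len(centers), size=n)
    times = np.asarray(centers)[comp] + jitter * rng.randn(n)
    return np.sort(times)
```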
As shown in Figure 3, only the proposed methods perform well. Since the rate function profile is identical for both models, the rate function statistic $\lambda_{L^2}$ fails to differentiate. The Wilcoxon test does work for intermediate dimensions; however, its performance is highly variable and unpredictable. In contrast to the previous example, the K-S test is consistently better than the C-M statistic in this problem.
6 Optimal stimulation parameter selection

Given a set of point processes, we can find the one that is closest to a target point process in terms of the proposed divergence. Here we use this method on a real dataset obtained from the somatosensory system of an anesthetized rat (see the supplement for the procedure). Specifically, we address finding optimal electrical stimulation settings that produce cortical spiking patterns similar to those observed with tactile stimuli.
Figure 3: [Top] Precisely timed spike train model (H0) versus equi-intensity Poisson process (H1): spike trains from the null and alternate hypotheses for L = 4. [Bottom] Comparison of the power of each method for L = 1, 2, 3, 4 on the precisely timed spike train model (H0) versus the equi-intensity Poisson process (H1). (Left) Power comparison for d_CM and d_KS at each L; the rate statistic λ_L2 is not labeled, since it is not able to detect the difference. (Right) Wilcoxon test on the number of action potentials (N) at each L. The error bars are standard deviations over 10 Monte Carlo runs.
The target process has 240 realizations elicited by tactile stimulation of the ventral side of the first digit with a mechanical tactor. We seek the closest of 19 processes elicited by electrical stimulation in the thalamus. Each process has 140 realizations that correspond to a particular setting of electrical stimulation. The settings correspond to combinations of duration and amplitude for biphasic current injection on two adjacent channels in the thalamus. The channel of interest and the stimulating channels were chosen to have significant responses to tactile stimulation.
The results from applying the C-M, K-S, and λ_L2 measures between the tactile responses and the sets from each electrical stimulation setting are shown in Figure 4. The overall trend among the measures is consistent, but the location of the minimum does not coincide for λ_L2.
7 Conclusion
In this paper, we have proposed two novel measures of divergence between point processes. The proposed measures have been derived from the basic probability law of a point process, and we have shown that they can be estimated from data efficiently and consistently. Using divergences for statistical inference transcends first- and second-order statistics, and enables distribution-free spike train analysis.
The time complexity of both methods is $O\left( \sum_n n \left( N_P(n) N_Q(n) + N_P^2(n) + N_Q^2(n) \right) \right)$, where $N_P(n)$ is the number of spike trains from P that have n spikes. In practice this is often faster than binned rate function estimation, which has time complexity O(BN), where B is the number of bins and $N = \sum_n n (N_P(n) + N_Q(n))$ is the total number of spikes in all the samples.
Figure 4: (Left) Dissimilarities/divergences from the tactile response across parameter sets (parameter index sorted by duration then amplitude). The values of each measure are shifted and scaled to be in the range 0 to 1; λ_L2 uses 2.5 ms bins with no smoothing. (Right) Responses from the tactile stimulation (left), the stimulation setting selected by λ_L2 (center), and the realizations selected by K-S and C-M (right); the two electrical settings shown are #15 (100 uA, 125 µs) and #17 (100 uA, 175 µs). The top row shows the spike trains stratified by spike count and then sorted by first spike time; the bottom row shows the average response binned at 2.5 ms, with the variance shown as a thin green line.
Although we have observed that the statistic based on the L2 distance between the rate functions often outperforms the proposed method, this approach involves a search over the smoothing kernel size and bin size, which can make the process slow and prohibitive. In addition, it brings the danger of multiple testing, since some smoothing kernel sizes may pick up spurious patterns that are only fluctuations due to finite sample size.
A similar approach based on stratification has also been addressed in [16], where the authors discussed the problem of estimating the Hellinger distance between two point processes. Although conceptually similar, the advantage of the proposed approach is that it is parameter free, whereas the other approach requires selecting appropriate kernels and the corresponding kernel sizes for each Euclidean partition. However, a stratification-based approach suffers in estimation when the count distributions of the point processes under consideration are flat, since in this situation the spike train realizations tend to lie in separate Euclidean partitions, and given a finite set of realizations, it becomes difficult to populate each partition sufficiently. Therefore, other methods should be investigated that allow two spike trains to interact irrespective of their spike counts. Other possible approaches include the kernel-based divergence measures proposed in [17], since those measures can be applied to any abstract space. However, this requires designing an appropriate strictly positive definite kernel on the space of spike trains.
In this paper, we have presented the divergences in the context of spike trains generated by neurons. However, the proposed methods can be used for general point processes and can be applied to other areas. Although we have proved consistency of the proposed measures, further statistical analysis, such as small-sample power analysis, rate of convergence, and asymptotic properties, would be interesting to address. A MATLAB implementation is freely available on the web (http://code.google.com/p/iocane) under a BSD license.
Acknowledgment
This work is partially funded by NSF Grant ECCS-0856441 and DARPA Contract N66001-10-C-2008.
References
[1] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer, 1988.
[2] D. H. Johnson, C. M. Gruner, K. Baggerly, and C. Seshagiri. Information-theoretic analysis of neural coding. Journal of Computational Neuroscience, 10(1):47–69, 2001.
[3] J. D. Victor. Spike train metrics. Current Opinion in Neurobiology, 15:585–592, 2005.
[4] A. Kuhn, A. Aertsen, and S. Rotter. Higher-order statistics of input ensembles and the response of simple model neurons. Neural Computation, 15(1):67–101, 2003.
[5] F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek. Spikes: Exploring the Neural Code. MIT Press, Cambridge, MA, USA, 1999.
[6] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York, 1986.
[7] G. Fasano and A. Franceschini. A multidimensional version of the Kolmogorov–Smirnov test. Royal Astronomical Society, Monthly Notices, 225:155–170, 1987.
[8] T. W. Anderson. On the distribution of the two-sample Cramér–von Mises criterion. Annals of Mathematical Statistics, 33(3):1148–1159, 1962.
[9] A. Kepecs, N. Uchida, H. A. Zariwala, and Z. F. Mainen. Neural correlates, computation and behavioural impact of decision confidence. Nature, 455(7210):227–231, 2008.
[10] M. P. P. Nawrot, C. Boucsein, V. R. Molina, A. Riehle, A. Aertsen, and S. Rotter. Measurement of variability dynamics in cortical spike trains. Journal of Neuroscience Methods, 169(2):374–390, 2008.
[11] P. Reinagel and R. Clay Reid. Precise firing events are conserved across neurons. Journal of Neuroscience, 22(16):6837–6841, 2002.
[12] M. R. DeWeese, M. Wehr, and A. M. Zador. Binary spiking in auditory cortex. Journal of Neuroscience, 23(21):7940–7949, 2003.
[13] R. S. Johansson and I. Birznieks. First spikes in ensembles of human tactile afferents code complex spatial fingertip events. Nature Neuroscience, 7(2):170–177, 2004.
[14] P. Tiesinga, J. M. Fellous, and T. J. Sejnowski. Regulation of spike timing in visual cortical circuits. Nature Reviews Neuroscience, 9:97–107, 2008.
[15] S. M. Bohte and M. C. Mozer. Reducing the variability of neural responses: A computational theory of spike-timing-dependent plasticity. Neural Computation, 19(2):371–403, 2007.
[16] I. Park and J. C. Príncipe. Quantification of inter-trial non-stationarity in spike trains from periodically stimulated neural cultures. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, 2010. Special session on Multivariate Analysis of Brain Signals: Methods and Applications.
[17] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample problem. CoRR, abs/0805.2368, 2008.
3,453 | 4,127 | Learning to localise sounds with spiking neural networks
Romain Brette
Département d'Études Cognitives
École Normale Supérieure
29 Rue d'Ulm
Paris 75005, France
[email protected]

Dan F. M. Goodman
Département d'Études Cognitives
École Normale Supérieure
29 Rue d'Ulm
Paris 75005, France
[email protected]
Abstract
To localise the source of a sound, we use location-specific properties of the signals
received at the two ears caused by the asymmetric filtering of the original sound by
our head and pinnae, the head-related transfer functions (HRTFs). These HRTFs
change throughout an organism's lifetime, during development for example, and
so the required neural circuitry cannot be entirely hardwired. Since HRTFs are
not directly accessible from perceptual experience, they can only be inferred from
filtered sounds. We present a spiking neural network model of sound localisation
based on extracting location-specific synchrony patterns, and a simple supervised
algorithm to learn the mapping between synchrony patterns and locations from a
set of example sounds, with no previous knowledge of HRTFs. After learning, our
model was able to accurately localise new sounds in both azimuth and elevation,
including the difficult task of distinguishing sounds coming from the front and
back.
Keywords: Auditory Perception & Modeling (Primary); Computational Neural Models, Neuroscience, Supervised Learning (Secondary)
1 Introduction
For many animals, it is vital to be able to quickly locate the source of an unexpected sound, for
example to escape a predator or locate a prey. For humans, localisation cues are also used to isolate a
speaker in a noisy environment. Psychophysical studies have shown that source localisation relies on
a variety of acoustic cues such as interaural time and level differences (ITDs and ILDs) and spectral
cues (Blauert 1997). These cues are highly dependent on the geometry of the head, body and pinnae,
and can change significantly during an animal?s lifetime, notably during its development but also in
mature animals (which are known to be able to adapt to these changes, for example Hofman et al.
1998). Previous neural models addressed the mechanisms of cue extraction, in particular neural
mechanisms underlying ITD sensitivity, using simplified binaural stimuli such as tones or noise
bursts with artificially induced ITDs (Colburn, 1973; Reed and Blum, 1990; Gerstner et al., 1996;
Harper and McAlpine, 2004; Zhou et al., 2005; Liu et al., 2008), but did not address the problem of
learning to localise natural sounds in realistic acoustical environments.
Since the physical laws of sound propagation are linear, the sound S produced by a source is received at any point x of an acoustical environment as a linearly filtered version F_x ∗ S (linear convolution), where the filter is specific to the location x of the listener, the location of the source and the acoustical environment (ground, walls, objects, etc.). For binaural hearing, the acoustical environment includes the head, body and pinnae, and the sounds received at the two ears are F_L ∗ S and F_R ∗ S, where (F_L, F_R) is a pair of location-specific filters. Because the two sounds originate from the same
signal, the binaural stimulus has a specific structure, which should result in synchrony patterns in
the encoding neurons. Specifically, we modelled the response of monaural neurons by a linear
filtering of the sound followed by a spiking nonlinearity. Two neurons A and B responding to two
different sides (left and right), with receptive fields N_A and N_B, transform the signals N_A ∗ F_L ∗ S and N_B ∗ F_R ∗ S into spike trains. Thus, synchrony between A and B occurs whenever N_A ∗ F_L = N_B ∗ F_R, i.e., for a specific set of filter pairs (F_L, F_R). Thus, in our model, sounds presented
at a given location induce specific synchrony patterns, which then activate a specific assembly of
postsynaptic neurons (coincidence detection neurons), in a way that is independent of the source
signal (see Goodman and Brette, in press). Learning a new location consists in assigning a label to
the activated assembly, using a teacher signal (for example visual input).
We used measured human HRTFs to generate binaural signals at different source locations from a
set of various sounds. These signals were used to train the model and we tested the localisation
accuracy with new sounds. After learning, the model was able to accurately locate unknown sounds
in both azimuth and elevation.
2 Methods
2.1 Virtual acoustics
Sound sources used were: broadband white noise; recordings of instruments and voices from the
RWC Music Database (http://staff.aist.go.jp/m.goto/RWC-MDB/); and recordings
of vowel-consonant-vowel sounds (Lorenzi et al., 1999). All sounds were of 1 second duration and
were presented at 80 dB SPL. Sounds were filtered by head-related impulse responses (HRIRs)
from the IRCAM LISTEN HRTF Database (http://recherche.ircam.fr/equipes/salles/listen/index.html). This database includes 187 approximately evenly spaced locations at all azimuths in 15 degree increments (except for high elevations) and elevations from -45 to 90 degrees in 15 degree increments. HRIRs from this and other databases do not provide sufficiently accurate timing information at frequencies below around 150 Hz, and so subsequent cochlear
filtering was restricted to frequencies above this point.
2.2 Mathematical principle
Consider two sets of neurons which respond monaurally to sounds from the left ear and from the
right ear by filtering sounds through a linear filter N (modeling their receptive field, corresponding
to cochlear and neural transformations on the pathway between the ear and the neuron) followed by
spiking. Each neuron has a different filter. Spiking is modeled by an integrate-and-fire description
or some other spiking model. Consider two neurons A and B which respond to sounds from the
left and right ear, respectively. When a sound S is produced by a source at a given location, it
arrives at the two ears as the binaural signal (F_L ∗ S, F_R ∗ S) (convolution), where (F_L, F_R) is the location-specific pair of acoustical filters. The filtered inputs to the two spiking models A and B are then N_A ∗ F_L ∗ S and N_B ∗ F_R ∗ S. These will be identical for any sound S whenever N_A ∗ F_L = N_B ∗ F_R, implying that the two neurons fire synchronously. For each location indicated by its filter pair (F_L, F_R), we define the synchrony pattern as the set of binaural pairs of neurons (A, B) such that N_A ∗ F_L = N_B ∗ F_R. This pattern is location-specific and independent of the source signal S. Therefore, the identity of the synchrony pattern induced by a binaural stimulus indicates the location of the source. Learning consists in assigning a synchrony pattern induced by a sound to the location of the source.

To have a better idea of these synchrony patterns, consider a pair of filters (F_L*, F_R*) that corresponds to a particular location x (azimuth, elevation, distance, and possibly also the position of the listener in the acoustical environment), and suppose neuron A has receptive field N_A = F_R* and neuron B has receptive field N_B = F_L*. Then neurons A and B fire in synchrony whenever F_R* ∗ F_L = F_L* ∗ F_R, in particular when F_L = F_L* and F_R = F_R*, that is, at location x (since convolution is commutative). More generally, if U is a band-pass filter and the receptive fields of neurons A and B are U ∗ F_R* and U ∗ F_L*, respectively, then the neurons fire synchronously at location x. The same property applies if a nonlinearity (e.g. compression) is applied after filtering. If the bandwidth of U is very small, then U ∗ F_R* is essentially the filter U followed by a delay and gain. Therefore, to represent all possible
locations in pairs of neuron filters, we consider that the set of neural transformations N is a bank of
band-pass filters followed by a set of delays and gains.
To decode synchrony patterns, we define a set of binaural neurons which receive input spike trains
from monaural neurons on both sides (two inputs per neuron). A binaural neuron responds preferentially when its two inputs are synchronous, so that synchrony patterns are mapped to assemblies
of binaural neurons. Each location-specific assembly is the set of binaural neurons for which the
input neurons fire synchronously at that location. This is conceptually similar to the Jeffress model
(Jeffress, 1948), where a neuron is maximally activated when acoustical and axonal delays match,
and related models (Lindemann, 1986; Gaik, 1993). However, the Jeffress model is restricted to azimuth estimation and it is difficult to implement it directly with neuron models because ILDs always
co-occur with ITDs and disturb spike-timing.
2.3 Implementation with spiking neuron models
Figure 1: Implementation of the model. The source signal arrives at the two ears (L and R) after acoustical filtering by HRTFs. The two monaural signals are filtered by a set of gammatone filters γ_i with central frequencies between 150 Hz and 5 kHz (cochlear filtering). In each band (3 bands shown, between dashed lines), various gains and delays are applied to the signal (neural filtering F_j^L and F_j^R) and spiking neuron models transform the resulting signals into spike trains, which converge from each side on a coincidence detector neuron (same neuron model). The neural assembly corresponding to a particular location (e.g. (30°, 15°), (45°, 15°), (90°, 15°)) is the set of coincidence detector neurons for which their input neurons fire in synchrony at that location (one pair for each frequency channel).
The overall structure and architecture of the model is illustrated in Figure 1. All programming
was done in the Python programming language, using the 'Brian' spiking neural network simulator
package (Goodman and Brette, 2009). Simulations were performed on Intel i7 Core processors. The
largest model involved approximately one million neurons.
Cochlear and neural filtering. Head-filtered sounds were passed through a bank of fourth-order
gammatone filters with center frequencies distributed on the ERB scale (central frequencies from
150 Hz to 5 kHz), modeling cochlear filtering (Glasberg and Moore, 1990). Linear filtering was
carried out in parallel with a custom algorithm designed for large filterbanks (around 30,000 filters
in our simulations). Gains and delays were then applied, with delays at most 1 ms and gains at most ±10 dB.
Neuron model. The filtered sounds were half-wave rectified and compressed by a 1/3 power law $I = k([x]^+)^{1/3}$ (where x is the sound pressure in pascals). The resulting signal was used as an input current to a leaky integrate-and-fire neuron with noise. The membrane potential V evolves according to the equation:
$$\tau_m \frac{dV}{dt} = V_0 - V + I(t) + \sigma \sqrt{2 \tau_m}\, \xi(t)$$
Table 1: Neuron model parameters

Parameter   Value                               Description
V_r         -60 mV                              Reset potential
V_0         -60 mV                              Resting potential
V_t         -50 mV                              Threshold potential
t_refrac    5 ms (0 ms for binaural neurons)    Absolute refractory period
σ           1 mV                                Standard deviation of membrane potential due to noise
τ_m         1 ms                                Membrane time constant
W           5 mV                                Synaptic weight for coincidence detectors
k           0.2 V/Pa^(1/3)                      Acoustic scaling constant
where τ_m is the membrane time constant, V_0 is the resting potential, ξ(t) is Gaussian noise (such that ⟨ξ(t), ξ(s)⟩ = δ(t − s)) and σ is the standard deviation of the membrane potential in the absence of spikes. When V crosses the threshold V_t a spike is emitted and V is reset to V_r and held there for an absolute refractory period t_refrac. These neurons make synaptic connections with binaural neurons in a second layer (two presynaptic neurons for each binaural neuron). These coincidence detector neurons are leaky integrate-and-fire neurons with the same equations, but their inputs are synaptic. Spikes arriving at these neurons cause an instantaneous increase W in V (where W is the synaptic weight). Parameter values are given in Table 1.
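As a rough illustration of these dynamics outside the Brian package used in the paper, here is an Euler–Maruyama sketch of the noisy integrate-and-fire neuron above (ours; the time step is an assumption):

```python
import numpy as np

def simulate_lif(I, dt=1e-5, tau_m=1e-3, v0=-0.060, vr=-0.060,
                 vt=-0.050, sigma=1e-3, t_refrac=5e-3, rng=np.random):
    """Euler-Maruyama integration of tau_m dV/dt = V0 - V + I(t)
    + sigma*sqrt(2*tau_m)*xi(t), with threshold, reset and refractoriness.

    I is the input trace in volts (already k*[x]^(1/3)); returns spike times.
    """
    v, spikes, refrac_until = v0, [], -1.0
    for i in range(len(I)):
        t = i * dt
        if t < refrac_until:
            continue                        # V held at reset during refractory period
        v += (v0 - v + I[i]) * dt / tau_m \
             + sigma * np.sqrt(2.0 * dt / tau_m) * rng.randn()
        if v >= vt:
            spikes.append(t)
            v = vr
            refrac_until = t + t_refrac
    return np.array(spikes)
```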
Estimating location from neural activation. Each location is assigned an assembly of coincidence
detector neurons, one in each frequency channel. When a sound is presented to the model, the total
firing rate of all neurons in each assembly is computed. The estimated location is the one assigned to
the maximally activated assembly. Figure 2 shows the activation of all location-specific assemblies
in an example where a sound was presented to the model, after learning.
Computing assemblies from HRTFs. In the hardwired model, we defined the location-specific assemblies from the knowledge of HRTFs (the learning algorithm is explained in Section 2.4). For a given location (filter pair (F_L, F_R)) and frequency channel (gammatone filter G), we choose the binaural neuron for which the gains (g_L, g_R) and delays (d_L, d_R) of the two presynaptic monaural neurons minimize the RMS difference
$$\Delta = \sqrt{\int \big( g_L (G \ast F_L)(t - d_L) - g_R (G \ast F_R)(t - d_R) \big)^2 \, dt},$$
that is, the RMS difference between the inputs of the two neurons for a sound impulse at that location. We also impose max(g_L, g_R) = 1 and min(d_L, d_R) = 0 (so that one delay is null and the other is positive). The RMS difference is minimized when the delays correspond to the maximum of the cross-correlation between L and R, $C(s) = \int (G \ast F_L)(t) \cdot (G \ast F_R)(t + s) \, dt$, so that $C(d_R - d_L)$ is the maximum, and $g_R / g_L = C(d_R - d_L) / \int R(t)^2 \, dt$.
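In discrete time, the optimal relative delay and gain for one frequency channel can be read off the cross-correlation; the sketch below is our illustration of this computation, with the sign convention noted in the comments:

```python
import numpy as np

def best_delay_gain(left, right, fs):
    """Relative delay (seconds) and gain ratio g_R/g_L that best align
    two band-filtered impulse responses G*F_L and G*F_R, sampled at fs."""
    c = np.correlate(left, right, mode="full")   # c[k] = sum_t left[t+lag]*right[t]
    lag = np.argmax(c) - (len(right) - 1)        # lag > 0: left delayed w.r.t. right
    gain_ratio = c.max() / np.sum(right ** 2)    # g_R / g_L = C(d) / integral(R^2)
    return lag / fs, gain_ratio
```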
2.4 Learning
In the hardwired model, the knowledge of the full set of HRTFs is used to estimate source location.
But HRTFs are never directly accessible to the auditory system, because they are always convolved
with the source signal. They cannot be genetically wired either, because they depend on the geometry of the head (which changes during development). In our model, when HRTFs are not explicitly
known, location-specific assemblies are learned by presenting unknown sounds at different locations
to the model, where there is one coincidence detector neuron for each choice of frequency, relative
delay and relative gain. Relative delays were uniformly chosen between −0.8 ms and 0.8 ms, and relative gains between −8 dB and 8 dB uniformly on a dB scale. In total, 69 relative delays and 61 relative gains were chosen. With 80 frequency channels, this gives a total of roughly 10^6 neurons
in the model. When a sound is presented at a given location, we define the assembly for this location
by picking the maximally activated neuron in each frequency channel, as would be expected from a
supervised Hebbian learning process with a teacher signal (e.g. visual cues). For practical reasons,
Figure 2: Activation of all location-specific assemblies (azimuth vs. elevation, in degrees) in response to a sound coming from a particular location indicated by a black +. The white x shows the model estimate (maximally activated assembly). The mapping from assemblies to locations was learned from a set of sounds.
we did not implement this supervised learning with spiking models, but supervised learning with spiking neurons has been described in several previous studies (Song and Abbott, 2001; Davison and Frégnac, 2006; Witten et al., 2008).
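Both the supervised assembly formation and the resulting location estimate reduce to simple argmax operations over firing rates; a possible sketch (ours, assuming rates are stored as a [channels x delays x gains] array):

```python
import numpy as np

def learn_assembly(rates):
    """Pick, in each frequency channel, the maximally activated
    coincidence detector for one training sound."""
    flat = rates.reshape(rates.shape[0], -1)
    return np.argmax(flat, axis=1)        # one (delay, gain) index per channel

def estimate_location(rates, assemblies):
    """Estimate the location as the assembly with the largest total rate.

    assemblies maps each location to the per-channel indices returned
    by learn_assembly for that location.
    """
    flat = rates.reshape(rates.shape[0], -1)
    totals = {loc: flat[np.arange(len(idx)), idx].sum()
              for loc, idx in assemblies.items()}
    return max(totals, key=totals.get)
```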
3 Results
When the model is 'hardwired' using the explicit knowledge of HRTFs, it can accurately localise
a wide range of sounds (Figure 3A-C): for the maximum number of channels we tested (80), we
obtained an average error of between 2 and 8 degrees for azimuth and 5 to 20 degrees for elevation
(depending on sound type), and with more channels this error is likely to further decrease, as it
did not appear to have reached an asymptote at 80 channels. Performance is better for sounds with
broader spectrums, as each channel provides additional information. The model was also able to
distinguish between sounds coming from the left and right (with an accuracy of almost 100%), and
performed well for the more difficult tasks of distinguishing between front and back (80-85%) and
between up and down (70-90%).
Figure 3D-F show the results using the learned best delays and gains, using the full training data
set (seven sounds presented at each location, each of one second duration) and different test sounds.
Performance is comparable to the hardwired model. Average azimuth errors for 80 channels are 4-8
degrees, and elevation errors are 10-27 degrees. Distinguishing left and right is done with close to
100% accuracy, front and back with 75-85% and up and down with 65-90%. Figure 4 shows how
the localisation accuracy improves with more training data. With only a single sound of one second
duration at each location, the performance is already very good. Increasing the training to three seconds of training data at each location improves the accuracy, but including further training data does
not appear to lead to any significant improvement. Although it is close, the performance does not
seem to converge to that of the hardwired model, which might be due to a limited sampling of delays
and gains (69 relative delays and 61 relative gains), or perhaps to the presence of physiological noise
in our models (Goodman and Brette, in press).
Figure 5 shows the properties of neurons in a location-specific assembly: interaural delay (Figure
5A) and interaural gain difference (Figure 5B) for each frequency channel. For this location, the
assemblies in the hardwired model and with learning were very similar, which indicates that the
learning procedure was indeed able to catch the binaural cues associated with that location. The
distributions of delays and gain differences were similar in the hardwired model and with learning.
In the hardwired model, these interaural delays and gains correspond to the ITDs and ILDs in fine
frequency bands. To each location corresponds a specific frequency-dependent pattern of ITDs and
ILDs, which is informative of both azimuth and elevation. In particular, these patterns are different
when the location is reversed between front and back (not shown), and this difference is exploited
by the model to distinguish between these two cases.
Figure 3: Performance of the hardwired model (A-C) and with learning (D-F). A, D: Mean error in azimuth estimates as a function of the number of frequency channels (i.e., assembly size) for white noise (red), vowel-consonant-vowel (blue) and musical instruments (green). Front-back reversed locations were considered as having the same azimuth. The channels were selected at random between 150 Hz and 5 kHz and results were averaged over many random choices. B, E: Mean error in elevation estimates. C, F: Categorization performance discriminating left and right (solid), front and back (dashed) and up and down (dotted).
Figure 4: Performance improvement with training (80 frequency channels). A: Average estimation error in azimuth (blue) and elevation (green) as a function of the number of sounds presented at each location during learning (each sound lasts 1 second). The error bars represent 95% confidence intervals. The dashed lines indicate the estimation error in the hardwired model (when HRTFs are explicitly known). B: Categorization performance vs. number of sounds per location for discriminating left and right (green), front and back (blue) and up and down (red).
4 Discussion
The sound produced by a source propagates to the ears according to linear laws. Thus the ears
receive two differently filtered versions of the same signal, which induce a location-specific structure
Figure 5: Location-specific assembly in the hardwired model and with learning. A: Preferred interaural delay vs. preferred frequency for neurons in an assembly corresponding to one particular location, in the hardwired model (white circles) and with learning (black circles); the colored background shows the distribution of preferred delays over all neurons in the hardwired model. B: Interaural gain difference vs. preferred frequency for the same assemblies.
in the binaural stimulus. When binaural signals are transformed by a heterogeneous population of
neurons, this structure is mapped to synchrony patterns, which are location-specific. We designed a
simple spiking neuron model which exploits this property to estimate the location of sound sources
in a way that is independent of the source signal. In the model, each location activates a specific
assembly. We showed that the mapping between assemblies and locations can be directly learned
in a supervised way from the presentation of a set of sounds at different locations, with no previous
knowledge of the HRTFs or the sounds. With 80 frequency channels, we found that 1 second of
training data per location was enough to estimate the azimuth of a new sound with mean error 6
degrees and the elevation with error 18 degrees.
Humans can learn to localise sound sources when their acoustical cues change, for example when
molds are inserted into their ears (Hofman et al., 1998; Zahorik et al., 2006). Learning a new
mapping can take a long time (several weeks in the first study), which is consistent with the idea
that the new mapping is learned from exposure to sounds from known locations. Interestingly,
the previous mapping is instantly recovered when the ear molds are removed, meaning that the
representations of the two acoustical environments do not interfere. This is consistent with our
model, in which two acoustical environments would be represented by two possibly overlapping
sets of neural assemblies.
In our model, we assumed that the receptive field of monaural neurons can be modeled as a band-pass
filter with various gains and delays. Differences in input gains could simply arise from differences in
membrane resistance, or in the number and strength of the synapses made by auditory nerve fibers.
Delays could arise from many causes: axonal delays (either presynaptic or postsynaptic), cochlear
delays (Joris et al., 2006), inhibitory delays (Brand et al., 2002). The distribution of best delays
of the binaural neurons in our model reflects the distribution of ITDs in the acoustical environment.
This contradicts the observation in many species that the best delays are always smaller than half the
characteristic period, i.e., they are within the π-limit (Joris and Yin, 2007). However, we checked that the model performed almost equally well with this constraint (Goodman and Brette, in press), which is not very surprising since best delays above the π-limit are mostly redundant. In small mammals (guinea pigs, gerbils), it has been shown that the best phases of binaural neurons in the MSO and IC are in fact even more constrained, since they are scattered around ±π/4, in contrast
with birds (e.g. barn owl) where the best phases are continuously distributed (Wagner et al., 2007).
However, in larger mammals such as cats, best IPDs in the MSO are more continuously distributed
(Yin and Chan, 1990), with a larger proportion close to 0 (Figure 18 in Yin and Chan, 1990). It
has not been measured in humans, but the same optimal coding theory that predicts the discrete
distribution of phases in small mammals predicts that best delays should be continuously distributed
above 400 Hz (80% of the frequency channels in our model). In addition, psychophysical results also
imply that humans can estimate both the azimuth and elevation of low-pass filtered sound sources
(< 3 kHz) (Algazi et al., 2001), which only contain binaural cues. This is contradictory with the two-channel model (best delays at ±π/4) and in agreement with ours (including the fact that elevation
could only be estimated away from the median plane in these experiments).
Our model is conceptually similar to a recent signal processing method (with no neural implementation) to localize sound sources in the horizontal plane (Macdonald, 2008), where coincidence detection is replaced by Pearson correlation between the two transformed monaural broadband signals
(no filterbank). However, that method requires explicit knowledge of the HRTFs, so that it cannot
be directly learned from natural exposure to sounds.
The HRTFs used in our virtual acoustic environment were recorded at a constant distance, so that we
could only test the model performance in estimating the azimuth and elevation of a sound source.
However, in principle, it should also be able to estimate the distance when the source is close.
It should also apply equally well to non-anechoic environments, because our model only relies
on the linearity of sound propagation. However, a difficult task, which we have not addressed, is
to locate sounds in a new environment, because reflections would change the binaural cues and
therefore the location-specific assemblies. One possibility would be to isolate the direct sound from
the reflections, but this requires additional mechanisms, which probably underlie the precedence
effect (Litovsky et al., 1999).
References
Algazi, V. R., C. Avendano, and R. O. Duda (2001, March). Elevation localization and head-related transfer function analysis at low frequencies. The Journal of the Acoustical Society of America 109(3), 1110–1122.
Brand, A., O. Behrend, T. Marquardt, D. McAlpine, and B. Grothe (2002). Precise inhibition is essential for microsecond interaural time difference coding. Nature 417(6888), 543.
Colburn, H. S. (1973, December). Theory of binaural interaction based on auditory-nerve data. I. General strategy and preliminary results on interaural discrimination. The Journal of the Acoustical Society of America 54(6), 1458–1470.
Davison, A. P. and Y. Frégnac (2006, May). Learning cross-modal spatial transformations through spike timing-dependent plasticity. J. Neurosci. 26(21), 5604–5615.
Gaik, W. (1993, July). Combined evaluation of interaural time and intensity differences: Psychoacoustic results and computer modeling. The Journal of the Acoustical Society of America 94(1), 98–110.
Gerstner, W., R. Kempter, J. L. van Hemmen, and H. Wagner (1996). A neuronal learning rule for sub-millisecond temporal coding. Nature 383(6595), 76.
Glasberg, B. R. and B. C. Moore (1990, August). Derivation of auditory filter shapes from notched-noise data. Hearing Research 47(1-2), 103–138. PMID: 2228789.
Goodman, D. F. M. and R. Brette (2009). The Brian simulator. Frontiers in Neuroscience 3(2), 192–197.
Goodman, D. F. M. and R. Brette (in press). Spike-timing-based computation in sound localization. PLoS Comp. Biol.
Harper, N. S. and D. McAlpine (2004). Optimal neural population coding of an auditory spatial cue. Nature 430(7000), 682–686.
Hofman, P. M., J. G. V. Riswick, and A. J. V. Opstal (1998). Relearning sound localization with new ears. Nat Neurosci 1(5), 417–421.
Jeffress, L. A. (1948, February). A place theory of sound localization. Journal of Comparative and Physiological Psychology 41(1), 35–39. PMID: 18904764.
Joris, P. and T. C. T. Yin (2007, February). A matter of time: internal delays in binaural processing. Trends in Neurosciences 30(2), 70–78. PMID: 17188761.
Joris, P. X., B. V. de Sande, D. H. Louage, and M. van der Heijden (2006). Binaural and cochlear disparities. Proceedings of the National Academy of Sciences 103(34), 12917.
Lindemann, W. (1986, December). Extension of a binaural cross-correlation model by contralateral inhibition. I. Simulation of lateralization for stationary signals. The Journal of the Acoustical Society of America 80(6), 1608–1622.
Litovsky, R. Y., H. S. Colburn, W. A. Yost, and S. J. Guzman (1999, October). The precedence effect. The Journal of the Acoustical Society of America 106(4), 1633–1654.
Liu, J., H. Erwin, S. Wermter, and M. Elsaid (2008). A biologically inspired spiking neural network for sound localisation by the inferior colliculus. In Artificial Neural Networks - ICANN 2008, pp. 396–405.
Lorenzi, C., F. Berthommier, F. Apoux, and N. Bacri (1999, October). Effects of envelope expansion on speech recognition. Hearing Research 136(1-2), 131–138.
Macdonald, J. A. (2008, June). A localization algorithm based on head-related transfer functions. The Journal of the Acoustical Society of America 123(6), 4290–4296. PMID: 18537380.
Reed, M. C. and J. J. Blum (1990, September). A model for the computation and encoding of azimuthal information by the lateral superior olive. The Journal of the Acoustical Society of America 88(3), 1442–1453. PMID: 2229677.
Song, S. and L. F. Abbott (2001, October). Cortical development and remapping through spike timing-dependent plasticity. Neuron 32(2), 339–350.
Wagner, H., A. Asadollahi, P. Bremen, F. Endler, K. Vonderschen, and M. von Campenhausen (2007). Distribution of interaural time difference in the barn owl's inferior colliculus in the low- and high-frequency ranges. J. Neurosci. 27(15), 4191–4200.
Witten, I. B., E. I. Knudsen, and H. Sompolinsky (2008, August). A Hebbian learning rule mediates asymmetric plasticity in aligning sensory representations. J Neurophysiol 100(2), 1067–1079.
Yin, T. C. and J. C. Chan (1990). Interaural time sensitivity in medial superior olive of cat. J Neurophysiol 64(2), 465–488.
Zahorik, P., P. Bangayan, V. Sundareswaran, K. Wang, and C. Tam (2006, July). Perceptual recalibration in human sound localization: Learning to remediate front-back reversals. The Journal of the Acoustical Society of America 120(1), 343–359.
Zhou, Y., L. H. Carney, and H. S. Colburn (2005, March). A model for interaural time difference sensitivity in the medial superior olive: Interaction of excitatory and inhibitory synaptic inputs, channel dynamics, and cellular morphology. J. Neurosci. 25(12), 3046–3058.
3,454 | 4,128 | Predictive Subspace Learning for Multi-view Data: a Large Margin Approach
Ning Chen†‡  Jun Zhu‡  Eric P. Xing‡
[email protected], {ningchen,junzhu,epxing}@cs.cmu.edu
† Dept. of CS & T, TNList Lab, State Key Lab of ITS, Tsinghua University, Beijing 100084 China
‡ School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 USA
Abstract
Learning from multi-view data is important in many applications, such as image
classification and annotation. In this paper, we present a large-margin learning
framework to discover a predictive latent subspace representation shared by multiple views. Our approach is based on an undirected latent space Markov network
that fulfills a weak conditional independence assumption that multi-view observations and response variables are independent given a set of latent variables. We
provide efficient inference and parameter estimation methods for the latent subspace model. Finally, we demonstrate the advantages of large-margin learning on
real video and web image data for discovering predictive latent representations
and improving the performance on image classification, annotation and retrieval.
1 Introduction
In many scientific and engineering applications, such as image annotation [28] and web-page classification [6], the available data usually come from diverse domains or are extracted from different
aspects, which will be referred to as views. Standard predictive methods, such as support vector
machines, are built with all the variables available, without taking into consideration the presence
of distinct views. These methods would sacrifice the predictive performance [7] and may also be
incapable of performing view-level analysis [12], such as predicting the tags for image annotation
and analyzing the underlying relationships amongst views. Different from the existing work that has
been done on exploring multi-view information to alleviate the difficult semi-supervised learning
[6, 12, 2, 14] and unsupervised clustering [8] problems, our goal is to develop a statistical framework
that learns a predictive subspace representation shared by multiple views when labels are provided
and perform view-level analysis, particularly view-level predictions.
To discover a subspace representation shared by multi-view data, the unsupervised canonical correlation analysis (CCA) [17] and its kernelized version [1] ignore the widely available supervised
information, such as image categories. Therefore, they could discover a subspace with weak predictive ability. The multi-view fisher discriminant analysis (FDA) [13] provides a supervised approach
to finding such a projected subspace. However, this deterministic approach cannot provide viewlevel predictions, such as image annotation; and it would also need a density estimator in order to
apply the information criterion [9] to detect view disagreement. In this paper, we consider a probabilistic approach to model multi-view data, which can perform both the response-level predictions
(e.g., image classification) and view-level predictions (e.g., image annotation).
Specifically, we propose a large-margin learning approach to discovering a predictive subspace representation for multi-view data. The approach is based on a generic multi-view latent space Markov
network (MN) that fulfills a weak conditional independence assumption that the data from different
views and the response variables are conditionally independent given a set of latent variables. This
conditional independence is much weaker than the typical assumption (e.g., in the seminal work of
co-training [6]) that multi-view data are conditionally independent given the very low dimensional
response variables [14]. Although directed Bayesian networks (BNs) (e.g., latent Dirichlet allocation
(LDA) [5] and probabilistic CCA [3]) can also be designed to fulfill the conditional independence,
the posterior inference can be hard because all the latent variables are coupled together given the
input variables [26]. Therefore, we ground our approach on the undirected MNs. Undirected latent
variable models have shown promising performance in many applications [26, 20]. In the multiview MN, conditioned on latent variables, each view defines a joint distribution similar to that in a
conditional random field (CRF) [18] and thus it can effectively extract latent topics from structured
data. For example, considering word ordering information could improve the quality of discovered
latent topics [23] compared to a method (e.g., LDA) solely based on the natural bag-of-word representation, and spatial relationship among regions in an image is also useful for computer vision
applications [15]. To learn the multi-view latent space MN, we develop a large-margin approach,
which jointly maximizes the data likelihood and minimizes the hinge-loss on training data. The
learning and inference problems are efficiently solved with a contrastive divergence method [25].
Finally, we concentrate on one special case of the large-margin mult-view MN and extensively evaluate it on real video and web image datasets for image classification, annotation and retrieval tasks.
Our results show that the large-margin approach can achieve significant improvements in terms of
prediction performance and discovered latent subspace representations.
The paper is structured as follows. Sec 2 and Sec 3 present the multi-view latent space MN and
its large-margin training. Sec 4 presents a special case. Sec 5 presents empirical results and Sec 6
concludes.
2 Multi-view Latent Space Markov Networks

[Figure 1: Multi-view Markov networks with K latent variables H1 ... HK connected to the two views X1 X2 ... XN and Z1 Z2 ... ZM.]

The unsupervised two-view latent space Markov network is shown in Fig. 1, which consists of two views of input data X := {X_n} and Z := {Z_m} and a set of latent variables H := {H_k}. For ease of presentation, we assume that the variables on each view are connected via a linear-chain. Extensions to multiple views and
more complex structures on each view can be easily done, after we have presented the constructive
definition of the model distribution. The model is constructed based on an underlying conditional
independence assumption that given the latent variables H, the two views X and Z are independent.
Graphically, we can see that both the exponential family Harmonium (EFH) [26] and its extension
of dual-wing Harmonium (DWH) [28] are special cases of multi-view latent space MNs. Therefore,
it is not surprising to see that multi-view MNs inherit the widely advocated property of EFH that
the model distribution can be constructively defined based on local conditionals on each view.
Specifically, we first define marginal distributions of the data on each view and the latent variables.
For each view, we consider the first-order Markov network. By the random field theory, we have
$$p(\mathbf{x}) = \exp\Big\{\sum_i \theta_i^\top \phi(x_i, x_{i+1}) - A(\theta)\Big\}, \quad\text{and}\quad p(\mathbf{z}) = \exp\Big\{\sum_j \eta_j^\top \psi(z_j, z_{j+1}) - B(\eta)\Big\},$$
where $\phi$ and $\psi$ are feature functions, and $A$ and $B$ are log-partition functions. For latent variables H, each component $h_k$ has an exponential family distribution and therefore the marginal distribution is:
$$p(\mathbf{h}) = \prod_k p(h_k) = \prod_k \exp\big\{\lambda_k^\top \varphi(h_k) - C_k(\lambda_k)\big\},$$
where $\varphi(h_k)$ is the feature vector of $h_k$, and $C_k$ is another log-partition function.
Next, the joint model distribution is defined by combining the above components in the log-domain
and introducing additional terms that couple the random variables X, Z and H. Specifically, we have
$$p(\mathbf{x}, \mathbf{z}, \mathbf{h}) \propto \exp\Big\{\sum_i \theta_i^\top \phi(x_i, x_{i+1}) + \sum_j \eta_j^\top \psi(z_j, z_{j+1}) + \sum_k \lambda_k^\top \varphi(h_k) + \sum_{ik} \phi(x_i, x_{i+1})^\top W_i^k \varphi(h_k) + \sum_{jk} \psi(z_j, z_{j+1})^\top U_j^k \varphi(h_k)\Big\}. \quad (1)$$
Then, we can directly write the conditional distributions on each view with shifted parameters:
$$p(\mathbf{x}|\mathbf{h}) = \exp\Big\{\sum_i \hat{\theta}_i^\top \phi(x_i, x_{i+1}) - A(\hat{\theta})\Big\}, \text{ where } \hat{\theta}_i = \theta_i + \sum_k W_i^k \varphi(h_k);$$
$$p(\mathbf{z}|\mathbf{h}) = \exp\Big\{\sum_j \hat{\eta}_j^\top \psi(z_j, z_{j+1}) - B(\hat{\eta})\Big\}, \text{ where } \hat{\eta}_j = \eta_j + \sum_k U_j^k \varphi(h_k); \text{ and}$$
$$p(\mathbf{h}|\mathbf{x}, \mathbf{z}) = \prod_k \exp\big\{\hat{\lambda}_k^\top \varphi(h_k) - C_k(\hat{\lambda}_k)\big\}, \text{ where } \hat{\lambda}_k = \lambda_k + \sum_i W_i^k \phi(x_i, x_{i+1}) + \sum_j U_j^k \psi(z_j, z_{j+1}).$$
We can see that conditioned on the latent variables, both p(x|h) and p(z|h) are defined in the
exponential form with a pairwise potential function, which is very similar to conditional random
fields [18]. Conversely, we can start with defining the local conditional distributions as above and
directly write the compatible joint distribution, which is of the log-linear form as in (1). We will use
$\Theta$ to denote all the parameters $(\theta, \eta, \lambda, W, U)$.
Since the latent variables are not directly connected, the complexity of inferring the posterior distribution of H is the same as in EFH when all the input data are observed, as reflected in the factorized
form of p(h|x, z). Therefore, multi-view latent space MNs do not increase the complexity on testing
if our task depends solely on the latent representation (i.e., expectation of H), such as information
retrieval [26], classification, clustering etc. However, the complexity of parameter estimation and
inferring the posterior distribution of each view (e.g., X) will be increased, depending on the structure on the view. For the simple case of linear-chain, the inference can be efficiently done with a
forward-backward message passing scheme [18]. For a general model structure, which may contain
many loops, approximate inference such as variational methods [22] is needed to perform the task.
We will provide more details when presenting the learning problem.
Up to now, we have focused on unsupervised multi-view latent space MNs, which are of wide use
in discovering latent subspace representations shared by multi-view data. In this paper, however,
we are more interested in the supervised setting where each input sample is associated with a
supervised response variable, such as image categories. Accordingly, our goal is to discover a
predictive subspace by exploring the supervised information. The supervised multi-view latent
space MNs are defined similarly as above, but with an additional view of response variables Y .
Now, the conditional independence is: X, Z and Y are independent if H is given. As we have
stated, this assumption is much weaker than the typical conditional independence assumption that
X and Z are independent given Y . Based on the constructive definition, we only need to specify
the conditional distribution of Y given H. In principle, Y can be continuous or discrete. Here, we
consider the discrete case, where y ? {1, ? ? ? , T }, and define
p(y|h) = P
exp{V> f (h, y)}
,
>
0
y 0 exp{V f (h, y )}
(2)
where f (h, y) is the feature vector whose elements from (y ? 1)K + 1 to yK are those of h and all
others are 0. Accordingly, V is a stacking parameter vector of T sub-vectors Vy , of which each one
corresponds to a class label y. Then, the joint distribution p(x, z, h, y) has the same form as in Eq.
(1), but with an additional term of V> f (h, y) = Vy> h in the exponential.
We note that a supervised version of DWH, which will be denoted by TWH (i.e., triple wing Harmonium), was proposed in [29], and the parameter estimation was done by maximizing the joint data
likelihood. However, the resultant TWH model does not yield improved performance compared to
the naive method that combines an unsupervised DWH for discovering latent representations and
an SVM for classification. This observation further motivates us to develop a more discriminative
learning approach to exploring the supervised information for discovering predictive latent subspace
representations. As we shall see, integrating the large-margin principle into one objective function
for joint latent subspace model and prediction model learning can yield much better results, in terms
of prediction performance and predictiveness of discovered latent subspace representations.
3 Parameter Estimation: a Large Margin Approach
To learn the supervised multi-view latent space MNs, a natural method is the maximum likelihood
estimation (MLE), which has been widely used to train directed [24, 30] and undirected latent variable models [26, 20, 28, 29]. However, likelihood-based parameter estimation pays additional efforts
in defining a normalized probabilistic model as in Eq. (2), of which the normalization factor can
make the inference hard, especially in directed models [24]. Moreover, the standard MLE could result in non-conclusive results, as reported in [29] and verified in our experiments. These have been
motivating us to develop a more discriminative learning approach. An arguably more discriminative
way to learn a classification model is to directly estimate the decision boundary, which is the essential idea underlying the very successful large-margin classifiers (e.g., SVMs). Here, we integrate the
large-margin idea into the learning of supervised multi-view latent space MNs for multi-view data
analysis, analogous to the development of MedLDA [31], which is directed and has single-view. For
brevity, we consider the general multi-class classification, as defined above.
3
3.1 Problem Definition
As in the log-linear model in Eq. (2), we assume that the discriminant function F (y, h; V) is linear,
that is, $F(y, \mathbf{h}; V) = V^\top f(\mathbf{h}, y)$, where $f$ and $V$ are defined the same as above. For prediction, we
take the expectation over the latent variable H and define the prediction rule as
$$y^* := \arg\max_y \mathbb{E}_{p(\mathbf{h}|\mathbf{x},\mathbf{z})}[F(\mathbf{H}, y; V)] = \arg\max_y V^\top \mathbb{E}_{p(\mathbf{h}|\mathbf{x},\mathbf{z})}[f(\mathbf{H}, y)], \quad (3)$$
where the expectation can be efficiently computed with the factorized form of p(h|x, z) when x and
z are fully observed. If missing values exist in x or z, an inference procedure is needed to compute
the expectation of the missing components, as detailed below in Eq. (5).
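For the fully observed case, the rule reduces to an inner product between the per-class sub-vectors of V and the posterior mean of H. Below is a minimal sketch assuming a NumPy representation in which V is stored as a T-by-K matrix; the function name and shapes are our own, not from the paper.

```python
import numpy as np

def predict_label(v, V):
    """Prediction rule (3) for fully observed inputs.

    v : (K,) posterior mean E[h | x, z] of the latent variables.
    V : (T, K) matrix whose rows are the per-class sub-vectors V_y.

    Because f(h, y) simply places h in the block for class y,
    V^T E[f(H, y)] reduces to the inner product V_y . v.
    """
    scores = V @ v  # one score per class
    return int(np.argmax(scores))
```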
Then, learning is to find an optimal $V^*$ that minimizes a loss function. Here, we minimize the hinge loss, as used in SVMs. Given training data $\mathcal{D} = \{(\mathbf{x}_d, \mathbf{z}_d, y_d)\}_{d=1}^{D}$, the hinge loss of the predictive rule (3) is
$$\mathcal{R}_{\mathrm{hinge}}(V) := \frac{1}{D}\sum_d \max_y \big[\Delta\ell_d(y) - V^\top \mathbb{E}_{p(\mathbf{h}|\mathbf{x},\mathbf{z})}[\Delta f_d(y)]\big],$$
where $\Delta\ell_d(y)$ is a loss function that measures how different the prediction $y$ is compared to the true label $y_d$, and $\mathbb{E}_{p(\mathbf{h}|\mathbf{x},\mathbf{z})}[\Delta f_d(y)] = \mathbb{E}_{p(\mathbf{h}|\mathbf{x},\mathbf{z})}[f(\mathbf{H}_d, y_d)] - \mathbb{E}_{p(\mathbf{h}|\mathbf{x},\mathbf{z})}[f(\mathbf{H}_d, y)]$. It can be proved that the hinge loss is an upper bound of the empirical loss $\mathcal{R}_{\mathrm{emp}} := \frac{1}{D}\sum_d \Delta\ell(\hat{y}_d)$. Applying the
principle of regularized risk minimization, we define the learning problem as solving
$$\min_{\Theta, V}\ \mathcal{L}(\Theta) + \frac{C_1}{2}\|V\|_2^2 + C_2 \mathcal{R}_{\mathrm{hinge}}(V), \quad (4)$$
where $\mathcal{L}(\Theta) := -\sum_d \log p(\mathbf{x}_d, \mathbf{z}_d)$ is the negative data likelihood and $C_1$ and $C_2$ are non-negative constants, which can be selected via cross-validation. Note that $\mathcal{R}_{\mathrm{hinge}}$ is also a function of $\Theta$.
Since problem (4) jointly maximizes the data likelihood and minimizes a training loss, it can be
expected that by solving this problem we can find a predictive latent space representation p(h|x, z)
and a prediction model parameter V, which on the one hand tend to predict as accurate as possible
on training data, while on the other hand tend to explain the data well.
3.2 Optimization
Variational approximation with Contrastive Divergence: Since the data likelihood $\mathcal{L}(\Theta)$ is generally intractable to compute, our method is based on the efficient contrastive divergence technique [16, 25, 26, 28]. Specifically, we derive a variational approximation $\mathcal{L}_v(q_0, q_1)$ of the negative log-likelihood $\mathcal{L}(\Theta)$, that is:
$$\mathcal{L}_v(q_0, q_1) := R(q_0(\mathbf{x}, \mathbf{z}, \mathbf{h}), p(\mathbf{x}, \mathbf{z}, \mathbf{h})) - R(q_1(\mathbf{x}, \mathbf{z}, \mathbf{h}), p(\mathbf{x}, \mathbf{z}, \mathbf{h})),$$
where $R(q, p)$ is the relative entropy, and $q_0$ is a variational distribution with $\mathbf{x}$ and $\mathbf{z}$ clamped to their observed values while $q_1$ is a distribution with all variables free. For $q$ ($q_0$ or $q_1$) in general, we make the structured mean-field assumption [27] that$^1$ $q(\mathbf{x}, \mathbf{z}, \mathbf{h}) = q(\mathbf{x})q(\mathbf{z})q(\mathbf{h})$.
($^1$The parametric form assumptions of $q$, as made in previous work [28, 29], are not needed.)
Solving the approximate problem: Applying the variational approximation Lv in problem (4), we
get an approximate objective function $L(\Theta, V, q_0, q_1)$. Then, we can develop an alternating minimization method, which iteratively minimizes $L(\Theta, V, q_0, q_1)$ over $q_0$ and $(\Theta, V)$. The distribution
q1 is reconstructed once the optimal q0 is achieved, see [25] for details.
The problem of solving q0 and q1 is the posterior inference problem. Specifically, for a variational
distribution $q$ (can be $q_0$ or $q_1$) in general, we keep $(\Theta, V)$ fixed and update each marginal as
$$q(\mathbf{x}) = p(\mathbf{x}\,|\,\mathbb{E}_{q(\mathbf{H})}[\mathbf{H}]), \quad q(\mathbf{z}) = p(\mathbf{z}\,|\,\mathbb{E}_{q(\mathbf{H})}[\mathbf{H}]), \quad\text{and}\quad q(\mathbf{h}) = \prod_k p(h_k\,|\,\mathbb{E}_{q(\mathbf{X})}[\mathbf{X}], \mathbb{E}_{q(\mathbf{Z})}[\mathbf{Z}]). \quad (5)$$
For q0 , (x, z) are clamped at their observed values, and only q0 (h) is updated, which can be very
efficiently done because of its factorized form. The distribution q1 is achieved by performing the
above updates starting from q0 . Several iterations can yield a good q1 . Again, we can see that both
q(x) and q(z) are CRFs, with the expectation of H as the condition. Therefore, for linear-chain
models, we can use a message passing scheme [18] to infer their marginal distributions, as needed
for parameter estimation and view-level prediction (e.g., image annotation), as we shall see. For
generally structured models, approximate inference techniques [22] can be applied.
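For the fully factorized specialization of Sec. 4, the updates in Eq. (5) reduce to a short fixed-point loop. The sketch below assumes that specialization (Bernoulli x, Gaussian z, Gaussian h); the function name, array shapes, and iteration count are our own choices.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mean_field_updates(x, z, W, U, theta, eta, sigma2, n_iters=3):
    """Fixed-point iteration of Eq. (5), fully factorized case.

    Running only the first line with (x, z) clamped gives q0;
    the free-running loop reconstructs the views and gives q1.
    """
    h = x @ W + z @ U                  # E_q[h] with observations clamped (q0)
    for _ in range(n_iters):           # free-running updates toward q1
        x = sigmoid(theta + h @ W.T)   # Bernoulli means of the tag view
        z = sigma2 * (eta + h @ U.T)   # Gaussian means of the real view
        h = x @ W + z @ U              # E_q[h] given the reconstructions
    return x, z, h
```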
After we have inferred q0 and q1 , parameter estimation can be done by alternating between
(1) estimating $V$ with $\Theta$ fixed: this problem is learning a multi-class SVM [11], which can be efficiently done with existing solvers; and (2) estimating $\Theta$ with $V$ fixed: this can be solved with
sub-gradient descent, where the sub-gradient is computed as:
$$\nabla\theta_i = -\mathbb{E}_{q_0}[\phi(x_i, x_{i+1})] + \mathbb{E}_{q_1}[\phi(x_i, x_{i+1})],$$
$$\nabla\eta_j = -\mathbb{E}_{q_0}[\psi(z_j, z_{j+1})] + \mathbb{E}_{q_1}[\psi(z_j, z_{j+1})],$$
$$\nabla\lambda_k = -\mathbb{E}_{q_0}[\varphi(h_k)] + \mathbb{E}_{q_1}[\varphi(h_k)],$$
$$\nabla W_i^k = -\mathbb{E}_{q_0}[\phi(x_i, x_{i+1})\varphi(h_k)^\top] + \mathbb{E}_{q_1}[\phi(x_i, x_{i+1})\varphi(h_k)^\top] - C_2\frac{1}{D}\sum_d (V_{\hat{y}_d k} - V_{y_d k})\frac{\partial\mathbb{E}_{q_0}[h_k]}{\partial W_i^k},$$
$$\nabla U_j^k = -\mathbb{E}_{q_0}[\psi(z_j, z_{j+1})\varphi(h_k)^\top] + \mathbb{E}_{q_1}[\psi(z_j, z_{j+1})\varphi(h_k)^\top] - C_2\frac{1}{D}\sum_d (V_{\hat{y}_d k} - V_{y_d k})\frac{\partial\mathbb{E}_{q_0}[h_k]}{\partial U_j^k},$$
where $\hat{y}_d = \arg\max_y [\Delta\ell_d(y) + V^\top \mathbb{E}_{q_0}[f(\mathbf{H}_d, y)]]$ is the loss-augmented prediction, and the expectation $\mathbb{E}_{q_0}[\phi(x_i, x_{i+1})]$ is actually the count frequency of $\phi(x_i, x_{i+1})$, likewise for $\mathbb{E}_{q_0}[\psi(z_j, z_{j+1})]$.
Note that in our integrated max-margin formulation, the sub-gradients of W and U contain an
additional term (i.e., the third term) compared to the standard DWH [28] with contrastive divergence
approximation. This additional term introduces a regularization effect to the latent subspace model.
If the loss-augmented prediction $\hat{y}_d$ differs from the true label $y_d$, this term will be non-zero and it biases the
model towards discovering a better representation for prediction.
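In the fully factorized case of Sec. 4, the partial derivative of E_q0[h_k] with respect to W_i^k is simply x_i, so the margin term in the sub-gradient of W is a sum of rank-one updates. A hedged sketch of that computation follows; the function name, the batch-as-D convention, and the input layout are our own assumptions.

```python
import numpy as np

def subgradient_W(x0, h0, x1, h1, batch, V, C2):
    """Sub-gradient of the objective w.r.t. W (factorized case).

    (x0, h0) and (x1, h1) are expectations under q0 and q1;
    batch is a list of (x_d, y_d, y_hat_d) with y_hat_d the
    loss-augmented prediction.  D is taken to be the batch size.
    """
    grad = -np.outer(x0, h0) + np.outer(x1, h1)
    D = len(batch)
    for x_d, y_d, y_hat_d in batch:
        # dE_q0[h_k]/dW_ik = x_i, so the margin term is rank one.
        grad -= (C2 / D) * np.outer(x_d, V[y_hat_d] - V[y_d])
    return grad
```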
4 Application to Image Classification, Annotation and Retrieval
We have developed the large-margin framework with a generic multi-view latent space MN to model
structured data. In order to carefully examine the basic learning principle and compare with existing
work, in this paper, we concentrate on a simplified but very rich case that the data on each view
are not structured, which has been extensively studied in EFH [26, 28, 29] for image classification,
annotation and retrieval. We denote the specialized model by MMH (max-margin Harmonium).
In theory, extensions to model structured multi-view data can be easily done under the general
framework, and the only needed change is on the step of inferring q1 , which can be treated as a
black box, given the wide literature on approximate inference [22]. We defer the systematical study
in this direction to the full extension of this work.
Specifically, we consider two views, where x is a vector of discrete word features (e.g., image tags)
and z is a vector of real-valued features (e.g., color histograms). Each xi is a Bernoulli variable
that denotes whether the ith term of a dictionary appears or not in an image, and each zj is a real
number that denotes the normalized color histogram of an image. We assume that each real-valued
hk follows a univariate Gaussian distribution. Therefore, we define the conditional distributions as
$$p(x_i = 1|\mathbf{h}) = \frac{1}{1 + e^{-(\theta_i + W_{i\cdot}\mathbf{h})}}, \quad p(z_j|\mathbf{h}) = \mathcal{N}\big(z_j\,|\,\sigma_j^2(\eta_j + U_{j\cdot}\mathbf{h}),\, \sigma_j^2\big), \quad p(h_k|\mathbf{x}, \mathbf{z}) = \mathcal{N}\big(h_k\,|\,\mathbf{x}^\top W_{\cdot k} + \mathbf{z}^\top U_{\cdot k},\, 1\big),$$
where $W_{i\cdot}$ and $W_{\cdot k}$ denote the $i$th row and $k$th column of $W$, respectively. Likewise for $U_{j\cdot}$ and $U_{\cdot k}$.
With the above definitions, we can follow exactly the same procedure as above to do parameter
estimation. For the step of inferring q0 and q1 , the distributions of x, z and h are all fully factorized.
Therefore, the sub-gradients can be easily computed. Details are deferred to the Appendix.
Testing: For classification and retrieval, we need to infer the posterior distribution of H and its
expectation. In this case, we have $\mathbb{E}_{p(\mathbf{h}|\mathbf{x},\mathbf{z})}[\mathbf{H}] = \mathbf{v}$, where $v_k = \mathbf{x}^\top W_{\cdot k} + \mathbf{z}^\top U_{\cdot k}$, $\forall 1 \le k \le K$. Therefore, the classification rule is $y^* = \arg\max_y V^\top f(\mathbf{v}, y)$. For retrieval, the expectation $\mathbf{v}$ of
each image is used to compute a similarity (e.g., cosine) between images. For annotation, we use
x to represent tags, which are observed in training. In testing, we infer the posterior distribution
p(x|z), which can be approximately computed by running the update equations (5) with z clamped
at its observed values. Then, tags with high probabilities are selected as annotation.
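The test-time operations all start from the same expected latent representation v. Below is a minimal sketch, with our own function names; cosine similarity is one concrete choice for the similarity mentioned above.

```python
import numpy as np

def latent_rep(x, z, W, U):
    """Expected latent representation v_k = x^T W_.k + z^T U_.k."""
    return x @ W + z @ U

def retrieve(v_query, v_train, top=10):
    """Rank training images by cosine similarity of their v vectors."""
    sims = (v_train @ v_query) / (
        np.linalg.norm(v_train, axis=1) * np.linalg.norm(v_query) + 1e-12)
    return np.argsort(-sims)[:top]
```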
5 Experiments
We report empirical results on TRECVID2003 and flickr image datasets. Our results demonstrate
that the large-margin approach can achieve significantly better performance on discovering predictive subspace representations and the tasks of image classification, annotation and retrieval.
5.1 Datasets and Features
The first dataset is the TRECVID2003 video dataset [28], which contains 1078 manually labeled
video shots that belong to 5 categories. Each shot is represented as a 1894-dim vector of text features
[Figure 2 panels (residue of three 2D scatter plots over classes 1-5): Avg-KL = 0.605 for MMH, Avg-KL = 0.319 for DWH, and Avg-KL = 0.198 for TWH.]
Figure 2: t-SNE 2D embedding of the discovered latent space representation by (Left) MMH, (Middle) DWH
and (Right) TWH on the TRECVID video dataset (Better viewed in color).
and a 165-dim vector of HSV color histogram, which is extracted from the associated keyframe. We
evenly split this dataset for training and testing. The second one is a subset selected from NUS-WIDE [10], which is a large image dataset constructed from flickr web images. This dataset contains
3411 images about 13 animals, including cat, tiger, etc. See Fig. 6 for example images for each
category. For each image, six types of low-level features [10] are extracted, including 634-dim real-valued features (i.e., 64-dim color histogram, 144-dim color correlogram, 73-dim edge direction histogram, 128-dim wavelet texture and 225-dim block-wise color moments) and a 500-dim bag-of-word representation based on SIFT [19] features. We randomly select 2054 images for training and
use the rest for testing. The online tags are also downloaded for evaluating image annotation.
5.2 Discovering Predictive Latent Subspace Representations
We first evaluate the predictive power of the discovered latent subspace representations.
Fig. 2 shows the 2D embedding of the discovered 10-dim latent representations by three models
(i.e., MMH, DWH and TWH) on the video data. Here, we use the t-SNE algorithm [21] to find
the embedding. We can see that clearly the latent subspace representations discovered by the largemargin based MMH show a strong grouping pattern for the images belonging to the same category,
while images from different categories tend to be separated from each other on the 2D embedding
space. In contrast, the latent subspace representations discovered by the likelihood-based unsupervised DWH and supervised TWH do not show a clear grouping pattern, except for the first category.
Images from different categories tend to mix together. These observations suggest that the largemargin based latent subspace model can discover more predictive or discriminative latent subspace
representations, which will result in better prediction performance, as we shall see.
To quantitatively evaluate the predictiveness of the discovered latent subspace representations, we
compute the pair-wise average KL-divergence between the per-class average distribution over latent
topics2 . As shown on the top of each plot in Fig. 2, the large-margin based MMH obtains a much
larger average KL-divergence than the other likelihood-based methods. This again suggests that
the latent subspace representations discovered by MMH are more discriminative or predictive. We
obtain the similar observations and conclusions on the flickr dataset (see Fig. 3 for some example
topics), where the average KL-divergence scores of 60-topic MMH, DWH and TWH are 3.23, 2.56
and 0.463, respectively.
Finally, we examine the predictive power of discovered latent topics. Fig. 3 shows five example
topics discovered by the large-margin MMH on the flickr image data. For each topic Hk , we show
the 5 top-ranked images that yield a high expected value of Hk , together with the associated tags.
Also, to qualitatively visualize the discriminative power of each topic among the 13 categories, we
show the average probability of each category distributed on the particular topic. From the results,
we can see that many of the discovered topics are very predictive for one or several categories. For
example, topics 3 and 4 are discriminative in predicting the categories hawk and whales, respectively.
Similarly, topics 1 and 5 are good at predicting squirrel and zebra, respectively. We also have some
topics which are good at discriminating a subset of categories against another subset. For example,
the topic 2 is good at discriminating {squirrel, wolf, rabbit} against {tiger, whales, zebra}; but it is
not very discriminative between squirrel and wolf.
² To compute this score, we first make the expected value of H non-negative by subtracting the smallest value from each element and then normalize it into a distribution over the K topics. The per-class average is
computed by averaging the topic distributions of the images within the same class. For a pair of distributions p
and q, the average KL-divergence is 1/2(R(p, q) + R(q, p)).
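A minimal sketch of this score, under our reading of the footnote (global minimum subtracted, per-image normalization, symmetrized KL averaged over class pairs); the names and the epsilon smoothing are our own.

```python
import numpy as np

def avg_kl_score(H, labels, eps=1e-12):
    """Pairwise average KL between per-class mean topic distributions."""
    P = H - H.min()                           # shift to be non-negative
    P = P / P.sum(axis=1, keepdims=True)      # per-image topic distribution
    classes = np.unique(labels)
    means = np.stack([P[labels == c].mean(axis=0) for c in classes])

    def kl(p, q):
        return np.sum(p * np.log((p + eps) / (q + eps)))

    scores = [0.5 * (kl(means[i], means[j]) + kl(means[j], means[i]))
              for i in range(len(classes)) for j in range(i + 1, len(classes))]
    return float(np.mean(scores))
```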
[Figure 3 content: five example topics (Topics 1-5), each shown with its 5 top-ranked images, the associated tags ("wolf, alaska, animal, nature, wildlife, africa, squirrel"; "ocean, boat, animal, wildlife, diving, sea, sydney, pacific, blue"; "hawk, bird, flying, wildlife, wings, nature, fabulous, texas"; "squirrel, nature, animal, wildlife, rabbit, cute, bunny, interestingness"; "zebra, zoo, animal, stripes, africa, mammal, black, white, nature, eyes"), and a bar chart of the average probability of each of the 13 categories on that topic.]
Figure 3: Example latent topics discovered by a 60-topic MMH on the flickr animal dataset.
5.3 Prediction Performance on Image Classification, Retrieval, and Annotation
5.3.1 Classification
We first compare the MMH with SVM, DWH, TWH, Gaussian Mixture (GM-Mix), Gaussian Mixture LDA (GM-LDA), and Correspondence LDA (CorrLDA) on the TRECVID data. See [4] for
the details of the last three models. We use the SVM^struct package³ to solve the sub-step of learning V in
MMH and build an SVM classifier, which uses both the text and color histogram features without
distinguishing them in different views. For each of the unsupervised DWH, GM-Mix, GM-LDA and
CorrLDA, a downstream SVM is built with the same tool based on the discovered latent representations. Fig. 4 (a) shows the classification accuracy of different models, where CorrLDA is omitted
because of its too low performance. We can see that the max-margin based multi-view MMH performs consistently better than any other competitors. In contrast, the likelihood-based TWH does
not show any conclusive improvements compared to the unsupervised DWH. These results show
that supervised information can help in discovering predictive latent space representations that are
more suitable for prediction if the model is appropriately learned, e.g., by using the large-margin
method. The superior performance of MMH compared to the flat SVM demonstrates the usefulness
of modeling multi-view inputs for prediction. The reasons for the inferior performance of other
models (e.g., CorrLDA and GM-Mix) are analyzed in [28, 29].
Fig. 4 (b) shows the classification accuracy on the flickr animal dataset. For brevity, we compare
MMH only with the best-performing DWH, TWH and SVM. For these methods, we use the 500-dim SIFT and 634-dim real features, which are treated as two views of inputs for MMH, DWH
and TWH. Also, we compare with the single-view MedLDA [31], which uses SIFT features only.
To be fair, we also evaluate a version of MMH that uses SIFT features, and denote it by MMH
(SIFT). Again, we can see that the large-margin based multi-view MMH performs much better than
any other methods, including SVM which ignores the presence of multi-view features. For the
single-view MMH (SIFT), it performs comparably (slightly better than) with the large-margin based
MedLDA, which is a directed BN. With the similar large-margin principle, MMH is an important
extension of MedLDA to the undirected latent subspace models and for multi-view data analysis.
5.3.2 Retrieval
For image retrieval, each test image is treated as a query and training images are ranked based on
their cosine similarity with the given query, which is computed based on latent subspace representations. An image is considered relevant to the query if they belong to the same category. We evaluate
the retrieval results by computing the average precision (AP) score and drawing precision-recall
curves. Fig. 4 (c) compares MMH with four other models when the topic number changes. Here,
³ http://svmlight.joachims.org/svm_multiclass.html
[Figure 4 panels (residue): (a) classification accuracy vs. number of latent topics on TRECVID for MMH, DWH, TWH, GM-Mix, GM-LDA and SVM; (b) classification accuracy vs. number of latent topics on flickr for MMH, DWH, TWH, MMH(SIFT), MedLDA(SIFT) and SVM; (c) average precision vs. number of topics, with precision-recall curves at 15 and 20 topics.]
Figure 4: Classification accuracy on the (a) TRECVID 2003 and (b) flickr datasets and (c) the average precision curve and the two precision-recall curves for image retrieval on TRECVID data.
[Figure 6 content: example images for each of the 13 categories, each with a short predicted tag list (e.g., "squirrel, animal, nature", "zebra, zoo, animal", "hawk, bird, wildlife").]
Figure 6: Example images from the 13 categories on the flickr animal dataset with predicted annotations. Tags
in blue are correct annotations while red ones are wrong predictions. The other tags are neutral.
we show the precision-recall curves when the topic number is set at 15 and 20. We can see that
for the AP measure, MMH outperforms all other methods in most cases, and MMH consistently
outperforms all the other methods in the measure of precision-recall curve. On the flickr dataset, we
have similar observations. The AP scores of the 60-topic MMH, DWH, and TWH are 0.163, 0.153
and 0.158, respectively. Due to space limitation, we defer the details to a full extension.
5.3.3 Annotation
Figure 5: Top-N F1-measure.

        F1@1   F1@2   F1@3   F1@4   F1@5   F1@6   F1@7
MMH     0.165  0.221  0.245  0.258  0.262  0.259  0.256
DWH     0.144  0.186  0.202  0.208  0.210  0.208  0.206
TWH     0.145  0.192  0.218  0.228  0.236  0.240  0.239
sLDA    0.077  0.124  0.146  0.159  0.169  0.171  0.175
Finally, we report the annotation results on the flickr dataset, with a dictionary of 1000 unique tags. The average number of tags per image is about 4.5. We compare MMH with DWH and TWH with two views of inputs: X for tags and Z for all the 634-dim real-valued features. We also compare with the sLDA annotation model [24], which uses SIFT features and tags as inputs. We use the top-N F1-measure [24], denoted by F1@N. With 60 latent topics, the top-N F1-measure scores
are shown in Fig. 5. We can see that the large-margin based MMH significantly outperforms all the
competitors. Fig. 6 shows example images from all the 13 categories, where for each category the
left image is generally of a good annotation quality and the right one is relatively worse.
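F1@N scores the N highest-ranked predicted tags against the ground-truth tag set. A small sketch of how such a score could be computed; the function name and the tie-free ranking assumption are our own.

```python
def f1_at_n(ranked_tags, true_tags, n):
    """Top-N F1: harmonic mean of precision and recall over the top N tags."""
    top = set(ranked_tags[:n])
    truth = set(true_tags)
    hits = len(top & truth)
    if hits == 0:
        return 0.0
    precision = hits / n
    recall = hits / len(truth)
    return 2 * precision * recall / (precision + recall)
```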
6 Conclusions and Future Work
We have presented a generic large-margin learning framework for discovering predictive latent subspace representations shared by structured multi-view data. The inference and learning can be efficiently done with contrastive divergence methods. Finally, we concentrate on a specialized model
with applications to image classification, annotation and retrieval. Extensive experiments on real
video and web image datasets demonstrate the advantages of large-margin learning for both prediction and predictive latent subspace discovery. In future work, we plan to systematically investigate
the large-margin learning framework on structured multi-view data analysis, e.g., on text mining [23]
and computer vision [15] applications.
Acknowledgments
This work was done while N. Chen was a visiting researcher at CMU under a CSC fellowship and supports
from Chinese NSF Grants (No. 60625304, 90716021, 61075027), the National Key Project for Basic Research
of China (Grants No. G2007CB311003, 2009CB724002). J. Zhu and E. P. Xing are supported by ONR
N000140910758, NSF IIS-0713379, NSF Career DBI-0546594, and an Alfred P. Sloan Research Fellowship.
References
[1] S. Akaho. A kernel method for canonical correlation analysis. In IMPS, 2001.
[2] K. Ando and T. Zhang. Two-view feature generation model for semi-supervised learning. In ICML, 2007.
[3] F. R. Bach and M. I. Jordan. A probabilistic interpretation of canonical correlation analysis. Technical
report, Technical Report 688, Dept. of Statistics. University of California, 2005.
[4] D. M. Blei and M. I. Jordan. Modeling annotated data. In ACM SIGIR, pages 127-134, 2003.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[6] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, 1998.
[7] U. Brefeld and T. Scheffer. Co-EM support vector learning. In ICML, 2004.
[8] K. Chaudhuri, S. M. Kakade, K. Livescu, and K. Sridharan. Multi-view clustering via canonical correlation analysis. In ICML, 2009.
[9] C. M. Christoudias, R. Urtasun, and T. Darrell. Multi-view learning in the presence of view disagreement.
In UAI, 2008.
[10] T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y.-T. Zheng. NUS-WIDE: A real-world web image
database from national university of singapore. In CIVR, 2009.
[11] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, (2):265-292, 2001.
[12] M. Culp, G. Michailidis, and K. Johnson. On multi-view learning with additive models. Annals of Applied
Statistics, 3(1):292-318, 2009.
[13] T. Diethe, D. R. Hardoon, and J. Shawe-Taylor. Multiview fisher discriminant analysis. In NIPS Workshop
on Learning from Multiple Sources, 2008.
[14] D. Foster, S. Kakade, and T. Zhang. Multi-view dimensionality reduction via canonical correlation analysis. Technical report, Technical Report TR-2008-4, TTI-Chicago, 2008.
[15] D. Gökalp and S. Aksoy. Scene classification using bag-of-regions representations. In CVPR, 2007.
[16] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation,
14(8):1771-1800, 2002.
[17] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321-377, 1936.
[18] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. In ICML, 2001.
[19] D. G. Lowe. Object recognition from local scale-invariant features. In CVPR, 1999.
[20] R. Salakhutdinov and G. E. Hinton. Replicated softmax: an undirected topic model. In NIPS, 2009.
[21] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. JMLR, 9:2579-2605, 2008.
[22] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference.
Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[23] H. M. Wallach. Topic modeling: Beyond bag-of-words. In ICML, 2006.
[24] C. Wang, D. M. Blei, and L. Fei-Fei. Simultaneous image classification and annotation. In CVPR, 2009.
[25] M. Welling and G. E. Hinton. A new learning algorithm for mean field boltzmann machines. In ICANN,
2001.
[26] M. Welling, M. Rosen-Zvi, and G. E. Hinton. Exponential family harmoniums with an application to
information retrieval. In NIPS, pages 1481-1488, 2004.
[27] E. P. Xing, M. I. Jordan, and S. Russell. A generalized mean field algorithm for variational inference in
exponential families. In UAI, 2003.
[28] E. P. Xing, R. Yan, and A. G. Hauptmann. Mining associated text and images with dual-wing harmoniums.
In UAI, 2005.
[29] J. Yang, Y. Liu, E. P. Xing, and A. G. Hauptmann. Harmonium models for semantic video representation
and classification. In SDM, 2007.
[30] J. Zhang, Z. Ghahramani, and Y. Yang. Flexible latent variable models for multi-task learning. Machine
Learning, 73(3):221-242, 2008.
[31] J. Zhu, A. Ahmed, and E. P. Xing. MedLDA: Maximum margin supervised topic models for regression
and classification. In ICML, 2009.
9
3,455 | 4,129 | A Bayesian Approach to Concept Drift
Stephen H. Bach Marcus A. Maloof
Department of Computer Science
Georgetown University
Washington, DC 20007, USA
{bach, maloof}@cs.georgetown.edu
Abstract
To cope with concept drift, we placed a probability distribution over the location
of the most-recent drift point. We used Bayesian model comparison to update
this distribution from the predictions of models trained on blocks of consecutive
observations and pruned potential drift points with low probability. We compare
our approach to a non-probabilistic method for drift and a probabilistic method
for change-point detection. In our experiments, our approach generally yielded
improved accuracy and/or speed over these other methods.
1 Introduction
Consider a classification task, in which the objective is to assign labels Y to vectors of one or more
attribute values X. To learn to perform this task, we use training data to model f : X ? Y ,
the unknown mapping from attribute values to labels, or target concept, in hopes of maximizing
classification accuracy. A common problem in online classification tasks is concept drift, which is
when the target concept changes over time. Identifying concept drift is often difficult. If the correct
label for some x is y1 at time step t1 and y2 at time step t2 , does this indicate concept drift or that
the training examples are noisy?
Researchers have approached drift in a number of ways. Schlimmer and Grainger [1] searched for
candidate models by reweighting training examples according to how well they fit future examples.
Some have maintained and modified partially learned models, e.g., [2, 3]. Many have maintained
and compared ?base? models trained on blocks of consecutive examples to identify those that are
the best predictors of new examples, e.g., [4, 5, 6, 7, 8]. We focus on this approach. Such methods
address directly the uncertainty about the existence and location of drift.
We propose using probability theory to reason about this uncertainty. A probabilistic model of drift
offers three main benefits to the research community. First, our experimental results show that a
probabilistic model can achieve new combinations of accuracy and speed on classification tasks.
Second, probability theory is a well-developed theory that could offer new insights into the problem
of concept drift. Third, probabilistic models can easily be combined in a principled way, and their
use in the machine-learning field continues to grow [9]. Therefore, our model could readily and
correctly share information with other probabilistic models or be incorporated into broader ones.
In this paper we present a probabilistic model of the number of most-recent training examples that
the active concept describes. Maximum-likelihood estimation would overfit the model by concluding that each training example was generated by a different target concept. This is unhelpful for future predictions, since it eliminates all generalization from past examples to future predictions. Instead, we use
Bayesian model comparison [9], or BMC, to reason about the trade-offs between model complexity
(i.e., the number of target concepts) and goodness of fit. We first describe BMC and its application to
detecting change points. We then describe a Bayesian approach to concept drift. Finally, we show
the results of an empirical comparison among our method (pruned and unpruned), BMC for change
points, and Dynamic Weighted Majority [5], an ensemble method for concept drift.
2 Bayesian model comparison
BMC uses probability theory to assign degrees of belief to candidate models given observations and
prior beliefs [9]. By Bayes' Theorem, $p(M|D) = \frac{p(D|M)\,p(M)}{p(D)}$, where $M$ is the set of models
under consideration and D is the set of observations. Researchers in Bayesian statistics have used
BMC to look for change points in time-series data. The goal of change-point detection is to segment
sequences of observations into blocks that are identically distributed and usually assumed to be
independent.
2.1
Previous work on Bayesian change-point detection
Barry and Hartigan [10, 11] used product partition models as distributions over possible segmentations of time-series data. Exact inference requires $O(n^3)$ time in the number of observations and may be accurately approximated in $O(n)$ time using Markov sampling [10]. In an online task, approximate training and testing on $n$ observations would require $O(n^2)$ time, since the model must
be updated after new training data. These updates would require resampling and testing for convergence.
Fearnhead [12] showed how to perform direct simulation from the posterior distribution of a class
of multiple-change-point models. This method requires O(n2 ) time and avoids the need to use
Markov sampling and to test for convergence. Again, an approximate method can be performed in
approximately linear time, but the model must be regularly rebuilt in online tasks.
The computational costs associated with offline methods make it difficult to apply them to online
tasks. Researchers have also looked for online methods for change-point detection. Fearnhead and
Liu [13] introduced an online version of Fearnhead's simulation method [12] which uses particle
filtering to quickly update the distribution over change points. Adams and MacKay [14] proposed
an alternative method for online Bayesian change-point detection. We now describe it in more detail,
since it will be the starting point for our own model.
2.2
A method for online Bayesian change-point detection
Adams and MacKay [14] proposed maintaining a discrete distribution over lt , the length in time
steps of the longest substring of observations that is identically distributed, ending at time step
t. This method therefore models the location of only the most recent change point, a cost-saving
measure useful for many online problems.
A conditional prior distribution $p(l_t|l_{t-1})$ is used, such that
$$p(l_t\,|\,l_{t-1}) = \begin{cases} \lambda^{-1} & \text{if } l_t = 0; \\ 1 - \lambda^{-1} & \text{if } l_t = l_{t-1} + 1; \\ 0 & \text{otherwise.} \end{cases} \quad (1)$$
In principle, a more sophisticated prior could be used. The crucial aspect is that, given that a substring is identically distributed, it assigns mass to only two outcomes: the next observation is distributed identically to the observations of the substring, or it is the first of a new substring.
The algorithm is initialized at time step 0 with a single base model that is the prior distribution over
observations. Initially, p(l0 = 0) = 1. Let Dt be the observation(s) made at time step t. At each
time step the algorithm computes a new posterior distribution $p(l_t|D_{1:t})$ by marginalizing out $l_{t-1}$ from
$$p(l_t, l_{t-1}|D_{1:t}) = \frac{p(D_t|l_t, D_{1:t-1})\, p(l_t|l_{t-1})\, p(l_{t-1}|D_{1:t-1})}{p(D_t|D_{1:t-1})}. \quad (2)$$
This is a straightforward summation over a discrete variable.
To find $p(l_t, l_{t-1}|D_{1:t})$, consider the three components in the numerator. First, $p(l_{t-1}|D_{1:t-1})$ is the distribution that was calculated at the previous time step. Next, $p(l_t|l_{t-1})$ is the prior distribution. Since only two outcomes are assigned any mass, each element in $p(l_{t-1}|D_{1:t-1})$ contributes mass to only two points in the posterior distribution. This keeps the algorithm linear in the size of the ensemble. Finally, $p(D_t|l_t, D_{1:t-1}) = p(D_t|D_{t-l_t:t-1})$. In other words, it is the predictive probability of a model trained on the observations received from time steps $t - l_t$ to $t - 1$. The denominator
then normalizes the distribution.
Once this posterior distribution p(lt |D1:t ) is calculated, each model in the ensemble is trained on
the new observation. Then, a new model is initialized with the prior distribution over observations,
corresponding to lt+1 = 0.
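One step of this recursion can be written compactly: every element of the old posterior either grows by one or collapses to l_t = 0. A minimal sketch, with our own names, the bookkeeping of model training elided, and lam standing for the lambda of Eq. (1).

```python
def update_run_length(prev_post, pred_probs, prior_pred, lam):
    """One step of the recursive posterior update (Eq. 2).

    prev_post  : prev_post[i] = p(l_{t-1} = i | D_{1:t-1}).
    pred_probs : pred_probs[i] = predictive probability of D_t under the
                 model trained on the observations covered by run length i.
    prior_pred : predictive probability of D_t under a fresh (prior) model.
    lam        : the lambda parameter of the prior in Eq. (1).
    """
    new_post = [(1.0 / lam) * prior_pred * sum(prev_post)]      # l_t = 0
    for i, p in enumerate(prev_post):                           # l_t = i + 1
        new_post.append((1.0 - 1.0 / lam) * pred_probs[i] * p)
    z = sum(new_post)             # equals p(D_t | D_{1:t-1})
    return [v / z for v in new_post]
```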
3 Comparing conditional distributions for concept drift
We propose a new approach to coping with concept drift. Since the objective is to maximize classification accuracy, we want to model the conditional distribution p(Y |X) as accurately as possible.
Using [14] as a starting point, we place a distribution over lt , which now refers to the length in time
steps that the currently active concept has been active.
There is now an important distinction between BMC for concept drift and BMC for change points:
BMC for concept drift models changes in p(Y |X), whereas BMC for change points models changes
in the joint distribution p(Y, X). We use the conditional distribution to look for drift points because
we do not wish to react to changes in the marginal distribution p(X). A change point in the joint
distribution p(Y, X) could correspond to a change point in p(X), a drift point in p(Y |X), or both.
Reacting only to changes in p(Y |X) means that we compare models on their ability to classify
unlabeled attribute values, not generate those values.
In other words, we assume that neither the sequence of attribute values X1:t nor the sequence of
class labels $Y_{1:t}$ alone provide information about $l_t$. Therefore $p(l_t|l_{t-1}, X_t) = p(l_t|l_{t-1})$ and $p(l_{t-1}|Y_{1:t-1}, X_{1:t}) = p(l_{t-1}|Y_{1:t-1}, X_{1:t-1})$. We also assume that examples from different concepts are independent. We use Equation 1 as the prior distribution $p(l_t|l_{t-1})$ [14]. Equation 2 is replaced with
$$p(l_t, l_{t-1}|Y_{1:t}, X_{1:t}) = \frac{p(Y_t|l_t, Y_{1:t-1}, X_{1:t})\, p(l_t|l_{t-1})\, p(l_{t-1}|Y_{1:t-1}, X_{1:t-1})}{p(Y_t|Y_{1:t-1}, X_{1:t})}. \quad (3)$$
To classify unlabeled attribute values X with class label Y , the predictive distribution is
$$p(Y|X) = \sum_{i=1}^{t} p(Y|X, Y_{1:t}, X_{1:t}, l_t = i)\, p(l_t = i). \quad (4)$$
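In code, Eq. (4) is just a posterior-weighted mixture of the base models' conditional predictions. A sketch assuming each base model exposes a predict_proba(x) method returning a class distribution; that interface is our own choice, not prescribed here.

```python
import numpy as np

def bcmc_predict(models, post, x):
    """Eq. (4): mix each base model's p(Y | X) by its posterior weight.

    models[i] is trained on the i most recent examples and stands in
    for p(Y | X, Y_{1:t}, X_{1:t}, l_t = i); post[i] = p(l_t = i).
    """
    probs = sum(w * m.predict_proba(x) for m, w in zip(models, post))
    return probs / probs.sum()   # guard against numerical drift
```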
We call this method Bayesian Conditional Model Comparison (BCMC). If left unchecked, the size
of its ensemble will grow linearly with the number of observations. In practice, this is far too
computationally expensive for many online-learning tasks. We therefore prune the set of models
during learning. Let $\tau$ be a user-specified threshold for the minimum posterior probability a model must have to remain in the ensemble. Then, if there exists some $i$ such that $p(l_t = i|D_{1:t}) < \tau < p(l_t = 0|l_{t-1})$, simply set $p(l_t = i|D_t) = 0$ and discard the model $p(D|D_{t-i:t})$. We call this
modified method Pruned Bayesian Conditional Model Comparison (PBCMC).
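A sketch of the pruning step; renormalizing the surviving mass after discarding models is our own assumption, and the guard mirrors the condition that the threshold lie below p(l_t = 0 | l_{t-1}) so a freshly added model is never pruned outright.

```python
def prune(models, post, tau, new_model_mass):
    """PBCMC pruning: drop models whose posterior falls below tau."""
    if tau >= new_model_mass:        # condition of the rule is not met
        return models, post
    keep = [i for i, p in enumerate(post) if p >= tau]
    models = [models[i] for i in keep]
    post = [post[i] for i in keep]   # assumes at least one model survives
    z = sum(post)
    return models, [p / z for p in post]
```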
4 Experiments
We conducted an empirical comparison using our implementations of PBCMC and BCMC. We hypothesized that looking for drift points in the conditional distribution p(Y |X) instead of change
points in the joint distribution p(Y, X) would lead to higher accuracy on classification tasks. To test
this, we included our implementation of the method of Adams and MacKay [14], which we refer to
simply as BMC. It is identical to BCMC, except that it uses Equation 2 to compute the posterior over
lt , where D ? (Y, X).
We also hypothesized that PBCMC could achieve improved combinations of accuracy and speed
compared to Dynamic Weighted Majority (DWM) [5], an ensemble method for concept drift that
uses a heuristic weighting scheme and pruning. DWM is a top performer on the problems we considered [5]. Like the other learners, DWM maintains a dynamically-sized, weighted ensemble of
models trained on blocks of examples. It predicts by taking a weighted-majority vote of the models'
predictions and multiplies the weights of those models that predict incorrectly by a constant β. It
then rescales the weights so that maximum weight is 1. Then if the algorithm?s global prediction
was incorrect, it adds a new model to the ensemble with a weight of 1, and it removes any models
with weights below a threshold θ. In the cases of models which output probabilities, DWM considers
a prediction incorrect if a model did not assign the most probability to the correct label.
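Since DWM's bookkeeping is central to the comparison that follows, a sketch of one update step as described above may be useful; model training and DWM's update-period parameter are omitted, and the function signature is our own.

```python
import numpy as np

def dwm_step(weights, predictions, true_label, beta=0.5, theta=1e-4):
    """One DWM bookkeeping step (sketch of the scheme described above).

    weights     -- current model weights (1-D array).
    predictions -- predictions[i] is model i's predicted class label.
    beta        -- multiplicative penalty for models that predict incorrectly.
    theta       -- weight threshold below which models are removed.
    """
    weights = np.asarray(weights, dtype=float)
    predictions = np.asarray(predictions)

    # Weighted-majority vote over the predicted class labels.
    classes = np.unique(predictions)
    votes = np.array([weights[predictions == c].sum() for c in classes])
    global_pred = classes[np.argmax(votes)]

    # Penalize incorrect models, then rescale so the maximum weight is 1.
    weights = np.where(predictions == true_label, weights, beta * weights)
    weights = weights / weights.max()

    keep = weights >= theta                 # remove models below threshold
    add_new = global_pred != true_label     # add a fresh model with weight 1
    return weights[keep], keep, add_new
```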
4.1 Test problems
We conducted our experiments using four problems previously used in the literature to evaluate methods for concept drift. The STAGGER concepts [1, 3] are three target concepts in a binary classification task presented over 120 time steps. Attributes and their possible values are shape ∈ {triangle, circle, rectangle}, color ∈ {red, green, blue}, and size ∈ {small, medium, large}. For the first 40 time steps, the target concept is color = red ∧ size = small. For the next 40 time steps, the target concept is color = green ∨ shape = circle. Finally, for the last 40 time steps, the target concept is size = medium ∨ size = large. A number of researchers have used this problem to evaluate methods for concept drift [4, 5, 3, 1]. Per the problem's usual formulation, we evaluated each learner by
presenting it with a single, random example at each time step and then testing it on a set of 100
random examples, resampled after each time step. We conducted 50 trials.
The SEA concepts [8] are four target concepts in a binary classification task, presented over 50,000
time steps. The target concept changes every 12,500 time steps, and associated with each concept
is a single, randomly generated test set of 2,500 examples. At each time step, a learner is presented
with a randomly generated example, which has a 10% chance of being labeled as the wrong class.
Every 100 time steps, the learner is tested on the active concept's test set. Each example consists
of numeric attributes xi ∈ [0, 10], for i = 1, . . . , 3. The target concepts are hyperplanes, such that y = + if x1 + x2 ≤ θ, where θ ∈ {7, 8, 9, 9.5}, for each of the four target concepts, respectively; otherwise, y = −. Note that x3 is an irrelevant attribute. Several researchers have used a shifting hyperplane to evaluate learners for concept drift [5, 6, 7, 2, 8]. We conducted 10 trials. In this experiment, μ0 = 5.
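For concreteness, a generator for SEA examples as just described (three uniform attributes on [0, 10], label determined by x1 + x2 ≤ θ, 10% class noise) might look like the following sketch; the function name and seed handling are our own choices.

```python
import numpy as np

def sea_examples(n, theta, noise=0.10, seed=0):
    """Generate n SEA examples for one target concept (sketch).

    Attributes x1, x2, x3 are uniform on [0, 10]; x3 is irrelevant.
    The clean label is '+' iff x1 + x2 <= theta, then flipped with
    probability `noise` (10% in the paper).
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 10.0, size=(n, 3))
    y = (X[:, 0] + X[:, 1] <= theta).astype(int)   # 1 = '+', 0 = '-'
    flip = rng.random(n) < noise
    y[flip] = 1 - y[flip]
    return X, y

# The four SEA concepts use theta in {7, 8, 9, 9.5}, one per 12,500 of
# the 50,000 time steps.
```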
The calendar-apprentice (CAP) data sets [15, 16] come from a personal-scheduling task. Using a subset of 34
symbolic attributes, the task is to predict a user's preference for a meeting's location, duration, start
time, and day of week. There are 12 attributes for location, 11 for duration, 15 for start time, and
16 for day of week. Each learner was tested on the 1,685 examples for User 1. At each time step,
the learner was presented the next example without its label. After classifying it, it was then told the
correct label so it could learn.
The electricity-prediction data set consists of 45,312 examples collected at 30-minute intervals between 7 May 1996 and 5 December 1998 [17]. The task is to predict whether the price of electricity
will go up or down based on five numeric attributes: the day of the week, the 30-minute period of
the day, the demand for electricity in New South Wales, the demand in Victoria, and the amount
of electricity to be transferred between the two. About 39% of the examples have unknown values
for either demand in Victoria or the transfer amount. At each time step, the learner classified the
next example in temporal order before being given the correct label and using it to learn. In this
experiment, μ0 = 0.
4.2 Experimental design
We tested the learning methods on the four problems described. For STAGGER and SEA, we measured accuracy on the test set, then computed average accuracy and 95% confidence intervals at each
time step. We also computed the average normalized area under the performance curves (AUC) with
95% confidence intervals. We used the trapezoid rule on adjacent pairs of accuracies and normalized
by dividing by the total area of the region. We present both AUC under the entire curve and after the
first drift point to show both a learner?s overall performance and its performance after drift occurs.
For CAP and electricity prediction, we measured accuracy on the unlabeled observations.
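The normalized AUC statistic described above reduces to a few lines; this sketch assumes accuracies recorded as fractions at unit-spaced time steps.

```python
import numpy as np

def normalized_auc(acc):
    """Area under an accuracy-vs-time curve via the trapezoid rule,
    normalized by the total area of the region (sketch; assumes the
    accuracies are fractions in [0, 1] at unit-spaced time steps)."""
    acc = np.asarray(acc, dtype=float)
    return np.trapz(acc) / float(len(acc) - 1)
```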
All the learning methods used a model we call Bayesian Naive Bayes, or BNB, as their base models.
BNB makes the conditionally independent factors assumption (a.k.a. the "naive Bayes" assumption) that the joint distribution p(Y, X) factors into p(Y) ∏i=1..n p(Xi | Y) [9]. It calculates values for
p(Y | X) as needed using Bayes' Theorem. It takes the Bayesian approach to probabilities (hence the additional "Bayes" in the name), meaning that it places distributions over the parameters that govern
Table 1: Results for (a) the STAGGER concepts and (b) the SEA concepts.

(a) STAGGER concepts

Learner and Parameters          AUC (overall)   AUC (after drift)
BNB, on each concept            0.912±0.005     0.914±0.007
PBCMC, λ = 20, θ = 10^-4        0.891±0.005     0.885±0.007
BCMC, λ = 20                    0.891±0.005     0.885±0.007
BMC, λ = 50                     0.884±0.005     0.876±0.008
DWM, β = 0.5, θ = 10^-4         0.878±0.005     0.868±0.007
BNB, on all examples            0.647±0.008     0.516±0.011

(b) SEA concepts

Learner and Parameters          AUC (overall)   AUC (after drift)
BNB, on each concept            0.974±0.002     0.974±0.002
DWM, β = 0.9, θ = 10^-3         0.974±0.001     0.974±0.001
BCMC, λ = 10,000                0.970±0.002     0.969±0.002
PBCMC, λ = 10,000, θ = 10^-4    0.964±0.002     0.961±0.003
BMC, λ = 200                    0.955±0.003     0.948±0.003
BNB, on all examples            0.910±0.003     0.889±0.002
the distributions p(Y ) and p(X|Y ) into which p(Y, X) factors. In our experiments, BNB predicted
by marginalizing out the latent parameter variables to compute marginal likelihoods. Note that we
use BNB, a generative model over p(Y, X), even though we said that we wish to model p(Y |X) as
accurately as possible. This is to ensure a fair comparison with BMC which needs p(Y, X). We are
more interested in the effects of looking for changes in each distribution, not which is a better model
for the active concept.
In our experiments, BNB placed Dirichlet distributions [9] over the parameters θ of the multinomial distributions p(Y) and p(Xi | Y) when Xi was a discrete attribute. All Dirichlet priors assigned equal density to all valid values of θ. BNB placed Normal-Gamma distributions [9] over the parameters μ and τ of normal distributions p(Xi | Y) when Xi was a continuous attribute: p(μ, τ) = N(μ | μ0, (κτ)^-1) Gam(τ | a, b). The predictive distribution is then a Student's t-distribution with mean μ and precision λ. In all of our experiments, κ = 2 and a = b = 1. The value of μ0 is specified for each experiment with continuous attributes.
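Because the Normal-Gamma prior is conjugate to the normal likelihood, the predictive probabilities BNB needs for continuous attributes have closed forms. The following sketch uses the standard conjugate update and Student's t predictive (see, e.g., [9]); the hyperparameter symbols follow our reconstruction of the text above.

```python
import numpy as np
from scipy import stats

def ng_posterior(x, mu0, kappa0=2.0, a0=1.0, b0=1.0):
    """Normal-Gamma posterior over (mu, tau) after observing x (sketch).

    Standard conjugate updates; the hyperparameters follow the text
    (kappa = 2, a = b = 1, and mu0 set per experiment)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean() if n else 0.0
    kn = kappa0 + n
    mun = (kappa0 * mu0 + n * xbar) / kn
    an = a0 + 0.5 * n
    bn = (b0 + 0.5 * ((x - xbar) ** 2).sum()
          + 0.5 * kappa0 * n * (xbar - mu0) ** 2 / kn)
    return mun, kn, an, bn

def ng_predictive_logpdf(xnew, mun, kn, an, bn):
    """Posterior predictive: a Student's t from marginalizing out (mu, tau)."""
    scale = np.sqrt(bn * (kn + 1.0) / (an * kn))
    return stats.t.logpdf(xnew, df=2.0 * an, loc=mun, scale=scale)
```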
We also tested BNB as a control to show the effects of not attempting to cope with drift and BNB
trained using only examples from the active concept (when such information was available) to show
possible accuracy given perfect information about drift.
Parameter selection is difficult when evaluating methods for concept drift. Train-test-and-validate
methods such as k-fold cross validation are not appropriate because the observations are ordered and not assumed to be identically distributed. We therefore tested each learner on each
problem using each of a set of values for each parameter. Due to limited space, we present
results for each learning method using the best parameter settings we found. We make no
claim that these parameters are optimal, but they are representative of the overall trends we observed. We performed this parameter search for all the learning methods. The parameters we tested were λ ∈ {10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000}, θ ∈ {10^-2, 10^-3, 10^-4} for PBCMC, β ∈ {0.25, 0.5, 0.75, 0.9}, and θ ∈ {10^-2, 10^-3, 10^-4, 0} for DWM.
Table 2: Accuracy on the CAP and electricity data sets.

               PBCMC    BCMC     BMC      DWM      BNB
Location       63.74    63.92    63.15    65.76    62.14
Duration       63.15    63.03    64.10    66.35    62.37
Start Time     38.40    39.17    35.19    37.98    32.40
Day of Week    51.81    51.81    51.22    51.28    51.22
Average        54.27    54.48    53.41    55.34    52.03
Electricity    85.32    85.33    65.37    82.31    62.44

For the CAP data sets, the parameters were PBCMC: λ = 10,000, θ = 10^-4; BCMC: λ = 5,000; BMC: λ = 10; DWM: β = 0.75, θ = 10^-4. For the electricity data set, they were PBCMC: λ = 10, θ = 10^-2; BCMC: λ = 10; BMC: λ = 10; DWM: β = 0.25, θ = 10^-3.

4.3 Results and analysis
Table 1 shows the top results for the STAGGER and SEA concepts. On the STAGGER concepts, PBCMC
and BCMC performed almost identically and have a higher mean AUC than BMC, but their 95%
confidence intervals overlap. PBCMC and BCMC outperformed DWM. On the SEA concepts, DWM
was the top performer, matching the accuracy of BNB trained on each concept and outperforming all
the other learning methods. BCMC was next, followed by PBCMC, then BMC, and then BNB.
Table 2 shows the top results for the CAP and electricity data sets. DWM performed the best on the
location and duration data sets, while BCMC performed best on the start time and day-of-week data
sets. PBCMC matched the accuracy of BCMC on the day-of-week and duration data sets and came
close to it on the others. DWM had the highest mean accuracy over all four tasks, followed by PBCMC
and BCMC, then BMC, and finally BNB. BCMC performed the best on the electricity data set, closely
followed by PBCMC.
The first conclusion is clear: looking for changes in the conditional distribution p(Y |X) led to
better accuracy than looking for changes in the joint distribution p(Y, X). With the close exception
of the duration problem in the CAP data sets, PBCMC and BCMC outperformed BMC, sometimes
dramatically so. Less clear are the relative merits of PBCMC and DWM. We now analyze these learners to better address this question.
4.3.1 Reactivity versus stability
The four test problems can be partitioned into two subsets: those on which PBCMC was generally
more accurate (STAGGER and electricity) and those on which DWM was (SEA and CAP). We can
obtain further insight into what separates these two subsets by noting that both PBCMC and DWM can
be said to have "strategies," which are determined by their parameters. For PBCMC, higher values of λ mean that it will initially assign less probability to new models. For DWM, higher values of β mean that it will penalize models less for making mistakes. For both, lower values of the threshold θ mean that they are slower to completely remove poorly performing models from consideration. We
can thus interpret these parameters to describe how "reactive" or "stable" the learners are, i.e., the
degree to which new observations can alter their hypotheses [4].
The two subsets are also partitioned by the strategy which was superior for the problems in each.
For both PBCMC and DWM, some of the most reactive parameterizations we tested were optimal on
STAGGER and electricity, but some of the most stable were optimal on SEA and CAP. Further, we
observed generally stratified results across parameterizations. For each problem, almost all of the
parameterizations of the top learner were more accurate than almost all of the parameterizations of
the other. This indicates that PBCMC was generally better for the concepts which favor reactivity,
whereas DWM was generally better for the concepts which favor stability.
4.3.2 Closing the performance gaps
We now consider why these gaps in performance exist and how they might be closed. Figure 1 shows
the average accuracies of PBCMC and DWM at each time step on the STAGGER and SEA concepts.
These are for the experiments reported in Table 1, so the parameters, numbers of trials, etc. are the
same. We present 95% confidence intervals at selected time steps for both. Figure 1 shows that the
[Figure 1: Average accuracy on (a) the STAGGER concepts and (b) the SEA concepts. Each panel plots predictive accuracy (%) against time step t for PBCMC and DWM, with the parameters from Table 1 and 95% confidence intervals at selected time steps. See text for details.]
better performing learners in each problem were faster to react to concept drift. This shows that DWM did not perform better on SEA simply by being more stable, whether the concept was drifting or not. On the SEA concepts, PBCMC did perform best with the most stable parameterization we tried, but its main problem was that it wasn't reactive enough when drift occurred.
We first consider whether the problem is one of parameter selection. Perhaps we can achieve better
performances by using a more reactive parameterization of DWM on certain problems and/or a more
stable parameterization of PBCMC on other problems. Our experimental results cast doubt on this
proposition. For the problems on which PBCMC was superior, DWM's best results were not obtained
using the most reactive parameterization. In other words, simply using an even more reactive parameterization of DWM did not improve performance on these problems. Further, on the duration problem in the CAP data sets, PBCMC also achieved the reported accuracy using λ = 5000 and θ = 10^-2, and on the location problem it achieved negligibly better accuracy using λ = 5000 and θ = 10^-3 or θ = 10^-4. Therefore, simply using an even more stable parameterization of PBCMC did not improve performance on these problems either. BCMC, which is just PBCMC with θ = 0, did
outperform PBCMC on SEA. It reacted more quickly than PBCMC did, but not as quickly as DWM
did, and at a much greater computational cost, since it had to maintain every model in order to have
the one(s) which would eventually gain weight relative to the other models. BCMC also was not a
significant improvement over PBCMC on the location and duration problems.
We therefore theorize that the primary reason for the differences in performance between PBCMC
and DWM is their approaches to updating their ensembles, which determines how they react to drift.
PBCMC favors reactivity by adding a new model at every time step and decaying the weights of all
models by the degree to which they are incorrect. DWM favors stability by only adding a new model
after incorrect overall predictions and only decaying weights of incorrect models, and then only by
a constant factor. This is supported by the results on problems favoring reactive parameterizations
compared with the results on problems favoring stable parameterizations. Further, that it is difficult
to close the performance gaps with better parameter selection suggests that there is a range of reactivity or stability each favors. When parameterized beyond this range, the performance of each
learner degrades, or at least plateaus.
To further support this theory, we consider trends in ensemble sizes. Figure 2 shows the average
number of models in the ensembles of PBCMC and DWM at each time step on the STAGGER and
SEA concepts. These are again for the experiments reported in Table 1, and again we present 95%
confidence intervals at selected time steps for both. The figure shows that the trends in ensemble
sizes were roughly interchanged between the two learners on the two problems. On both problems,
one learner stayed within a relatively small range of ensemble sizes, whereas the other continued to
expand the ensemble when the concept was stable, only significantly pruning soon after drift. On
STAGGER , PBCMC expanded its ensemble size far more, whereas DWM did on SEA . This agrees
with our expectations for the synthetic concepts. STAGGER contains no noise, whereas SEA does,
which complements the designs of the two learners. When noise is more likely, DWM will update
[Figure 2: Average number of models in the ensembles of PBCMC and DWM at each time step on (a) the STAGGER concepts and (b) the SEA concepts, with the parameters from Table 1 and 95% confidence intervals at selected time steps. See text for details.]
its ensemble more than when it is not as likely. However, when noise is more likely, PBCMC will
usually have difficulty preserving high weights for models which are actually useful. Conversely,
PBCMC regularly updates its ensemble, and DWM will have less difficulty maintaining high weights
on good models because it only decays weights by a constant factor.
Therefore, it seems that each learner reaches the boundary of its favored range of reactivity or
stability when further changes in that direction cause it to either be so reactive that it often assigns
relatively high probability of drift to many time steps for which there was no drift, or so stable that
it cannot react to actual drift. On STAGGER, PBCMC matched the performance of BNB on the first
target concept (not shown), whereas DWM made more mistakes as it reacted to erroneously inferred
drift. On SEA, PBCMC needs to be parameterized to be so stable that it cannot react quickly to drift.
5 Conclusion and Future Work
In this paper we presented a Bayesian approach to coping with concept drift. Empirical evaluations
supported our method. We showed that looking for changes in the conditional distribution p(Y |X)
led to better accuracy than looking for changes in the joint distribution p(Y, X). We also showed that
our Bayesian approach is competitive with one of the top ensemble methods for concept drift, DWM,
sometimes beating and sometimes losing to it. Finally, we explored why each method sometimes
outperforms the other. We showed that both PBCMC and DWM appear to favor a different range of
reactivity or stability.
Directions for future work include integrating the advantages of both PBCMC and DWM into a single
learner. Related to this task is a better characterization of their relative advantages and the relationships among them, their favored ranges of reactivity or stability, and the problems to which they are applied. It is also important to note that the more constrained ensemble sizes discussed above correspond to faster classification speeds. Future work could explore how to balance this desideratum with
the desire for better accuracy. Finally, another direction is to integrate a Bayesian approach with
other probabilistic models. With a useful probabilistic model for concept drift, such as ours, one
could potentially incorporate existing probabilistic domain knowledge to guide the search for drift
points or build broader models that use beliefs about drift to guide decision making.
Acknowledgments
The authors wish to thank the anonymous reviewers for their constructive feedback. The authors also
wish to thank Lise Getoor and the Department of Computer Science at the University of Maryland,
College Park. This work was supported by the Georgetown University Undergraduate Research
Opportunities Program.
References
[1] J. C. Schlimmer and R. H. Granger. Beyond incremental processing: Tracking concept drift. In Proceedings of the Fifth National Conference on Artificial Intelligence, pages 502–507, Menlo Park, CA, 1986. AAAI Press.
[2] G. Hulten, L. Spencer, and P. Domingos. Mining time-changing data streams. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 97–106, New York, NY, 2001. ACM Press.
[3] G. Widmer and M. Kubat. Learning in the presence of concept drift and hidden contexts. Machine Learning, 23:69–101, 1996.
[4] S. H. Bach and M. A. Maloof. Paired learners for concept drift. In Proceedings of the Eighth IEEE International Conference on Data Mining, pages 23–32, Los Alamitos, CA, 2008. IEEE Press.
[5] J. Z. Kolter and M. A. Maloof. Dynamic weighted majority: An ensemble method for drifting concepts. Journal of Machine Learning Research, 8:2755–2790, Dec 2007.
[6] J. Z. Kolter and M. A. Maloof. Using additive expert ensembles to cope with concept drift. In Proceedings of the Twenty-second International Conference on Machine Learning, pages 449–456, New York, NY, 2005. ACM Press.
[7] H. Wang, W. Fan, P. S. Yu, and J. Han. Mining concept-drifting data streams using ensemble classifiers. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 226–235, New York, NY, 2003. ACM Press.
[8] W. N. Street and Y. Kim. A streaming ensemble algorithm (SEA) for large-scale classification. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 377–382, New York, NY, 2001. ACM Press.
[9] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, Berlin-Heidelberg, 2006.
[10] D. Barry and J. A. Hartigan. A Bayesian analysis for change point problems. Journal of the American Statistical Association, 88(421):309–319, 1993.
[11] D. Barry and J. A. Hartigan. Product partition models for change point problems. The Annals of Statistics, 20(1):260–279, 1992.
[12] Paul Fearnhead. Exact and efficient Bayesian inference for multiple changepoint problems. Statistics and Computing, 16(2):203–213, 2006.
[13] P. Fearnhead and Z. Liu. On-line inference for multiple changepoint problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(4):589–605, September 2007.
[14] R. P. Adams and D. J. C. MacKay. Bayesian online changepoint detection. Technical report, University of Cambridge, 2007. http://www.inference.phy.cam.ac.uk/rpa23/papers/rpachangepoint.pdf.
[15] A. Blum. Empirical support for winnow and weighted-majority algorithms: Results on a calendar scheduling domain. Machine Learning, 26:5–23, 1997.
[16] T. M. Mitchell, R. Caruana, D. Freitag, J. McDermott, and D. Zabowski. Experience with a learning personal assistant. Communications of the ACM, 37(7):80–91, July 1994.
[17] M. Harries, C. Sammut, and K. Horn. Extracting hidden context. Machine Learning, 32(2):101–126, 1998.
sized:1 price:1 change:31 included:1 determined:1 except:1 hyperplane:1 total:1 experimental:3 vote:1 exception:1 college:1 searched:1 support:2 reactive:8 bnb:17 constructive:1 evaluate:3 tested:7 |
3,456 | 413 | An Analog VLSI Chip for Finding Edges
from Zero-crossings
Wyeth Bair
Christof Koch
Computation and Neural Systems Program
Caltech 216-76
Pasadena, CA 91125
Abstract
We have designed and tested a one-dimensional 64 pixel, analog CMOS
VLSI chip which localizes intensity edges in real-time. This device exploits
on-chip photoreceptors and the natural filtering properties of resistive networks to implement a scheme similar to and motivated by the Difference
of Gaussians (DOG) operator proposed by Marr and Hildreth (1980). Our
chip computes the zero-crossings associated with the difference of two exponential weighting functions. If the derivative across this zero-crossing
is above a threshold, an edge is reported. Simulations indicate that this
technique will extend well to two dimensions.
1 INTRODUCTION
The zero-crossings of the Laplacian of the Gaussian, ∇²G, are often used for detecting edges. Marr and Hildreth (1980) argued that the Mexican-hat shape of the ∇²G
operator can be approximated by the difference of two Gaussians (DOG). In this
spirit, we have built a chip that takes the difference of two resistive-network smoothings of photoreceptor input and finds the resulting zero-crossings. The Green's function of the resistive network, a symmetrical decaying exponential, differs from the
Gaussian filter. Figure 1 shows the "Mexican-hat" shape of the DOG superimposed
on the "witch-hat" shape of the difference of exponentials (DOE) filter implemented
by our chip.
This implementation has the particular advantage of exploiting the smoothing operation performed by a linear resistive network, shown in Figure 2. In such a network,
data voltages d are applied to the nodes along the network via conductances G, and
the nodes are connected by resistances R. Following Kirchhoff's laws, the network
node voltages v settle to values such that power dissipation is minimized. One
may think of the network node voltages v as the convolution of the input with the
symmetrical decaying exponential filter function. The characteristic length of this
filter function is approximately 1/√(RG), where G is the data conductance and R
the network resistance.
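The settling behavior is easy to reproduce numerically: minimizing power dissipation makes the node voltages the solution of a linear, tridiagonal system. A sketch assuming ideal (non-saturating) resistors:

```python
import numpy as np

def resistive_smooth(d, R, G):
    """Steady-state node voltages of the 1-D resistive network (sketch).

    Kirchhoff's current law at an interior node i reads
        G*(d[i] - v[i]) + (v[i-1] - v[i])/R + (v[i+1] - v[i])/R = 0,
    with one neighbor term dropped at each end. The result smooths d with
    a two-sided exponential of space constant roughly 1/sqrt(R*G)."""
    n = len(d)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = G + (2.0 if 0 < i < n - 1 else 1.0) / R
        if i > 0:
            A[i, i - 1] = -1.0 / R
        if i < n - 1:
            A[i, i + 1] = -1.0 / R
    return np.linalg.solve(A, G * np.asarray(d, dtype=float))
```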
Figure 1: The Mexican-hat shape of the difference of Gaussians (dotted) and the
witch-hat shape of the filter implemented by our chip.
Such a network is easily implemented in silicon and avoids the burden of additional
circuitry which others have used to implement Gaussian kernels. Our simulations
with digitized camera images show only minor differences between the zero-crossings
from the DOE filter and those from the DOG.
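That comparison is straightforward to reproduce in software. The sketch below filters a 1-D signal with both kernels and keeps only thresholded zero-crossings; the particular kernel scales and names are illustrative assumptions, not values from the chip.

```python
import numpy as np

def doe_kernel(x, l1=1.0, l2=2.0):
    """Difference of symmetric decaying exponentials (the "witch hat")."""
    e = lambda l: np.exp(-np.abs(x) / l) / (2.0 * l)
    return e(l1) - e(l2)            # narrow center minus wide surround

def dog_kernel(x, s1=0.75, s2=1.25):
    """Difference of Gaussians (the "Mexican hat")."""
    g = lambda s: np.exp(-x**2 / (2.0 * s**2)) / (np.sqrt(2.0 * np.pi) * s)
    return g(s1) - g(s2)

def edges(signal, kernel, slope_thresh):
    """Indices of zero-crossings of the filtered signal whose slope exceeds
    the threshold, mirroring the chip's masking of weak crossings."""
    f = np.convolve(signal, kernel, mode='same')
    sign_change = np.sign(f[:-1]) != np.sign(f[1:])
    steep = np.abs(np.diff(f)) > slope_thresh
    return np.nonzero(sign_change & steep)[0]
```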
[Figure 2: 1-D resistive network: data voltages d applied through conductances G to nodes connected by resistances R.]
2 ANALOG VLSI IMPLEMENTATION
This chip was implemented with a 2.0 μm CMOS n-well process available through the
MOSIS silicon foundry. Intensity edges are detected using four stages of circuitry:
photoreceptors capture incoming light, a pair of I-D resistive networks smooth the
input image, transconductance amplifiers subtract the smoothed images, and digital
circuitry detects zero-crossings. Figures 3 and 4 show block diagrams for two pixels
of the 64 pixel chip.
[Figure 3: Block circuit diagram for two of 64 pixels as described in Section 2.]
Processing begins at a line of photoreceptors spaced 100 μm apart which encode the
logarithm of light intensity as a voltage V P, shown in Figure 3. The set of voltages from the photoreceptors are reported to corresponding nodes of two resistive
networks via transconductance amplifiers connected as followers. The followers'
voltage biases, VGI and VG2, can be adjusted off-chip to independently set the data
conductances for each resistive network. The network resistors are implemented
as Mead's saturating resistors (Mead, 1989). Voltage biases VRI and VR2 allow
independent off-chip adjustment of the two network resistances. The data conductance and network resistance values determine the space constant of the smoothing
filter which each network implements. The sets of voltages VI and V2, shown in
Figure 3, represent the two filtered versions of the image. Wide-range transconductance amplifiers (Mead, 1989) produce currents, I, proportional to the difference
V1 − V2.
[Figure 4: Zero-crossing detection and threshold circuitry.]
Figure 4 shows the final stage of processing which detects zero-crossings in the
sequence of currents I and implements a threshold on the slope of those zero-crossings. Currents Ii and Ii+1 charge or discharge the inputs of an exclusive-OR gate. The output of this gate is the first input to a NAND gate which is used to implement the threshold. A current proportional to the magnitude of the difference Ii − Ii+1 charges the second input of the NAND gate, while a threshold current discharges this input. If the charging current, representing the slope of the zero-crossing, is greater than the threshold current set off-chip by the bias voltage Vthresh, this NAND input is charged to logical 1; otherwise, this input is discharged to logical 0. The output of the NAND gate, VZi, indicates the presence, logical 0, or the absence, logical 1, of a zero-crossing with slope greater than Ithresh.
A final stage of circuitry is used to multiplex the sequence of 63 bits, VZ, and corresponding currents Ii − Ii+1 indicating the slope of the zero-crossings.
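In software terms, this digital stage reduces to a sign comparison (the XOR) and a slope test (the NAND) per adjacent pair of currents; a sketch:

```python
import numpy as np

def zero_crossing_bits(I, I_thresh):
    """Emulate the output stage (sketch): VZ[i] is 0 (edge present) when
    currents I[i] and I[i+1] differ in sign (the XOR stage) and the slope
    |I[i] - I[i+1]| exceeds the threshold current (the NAND stage)."""
    I = np.asarray(I, dtype=float)
    opposite = np.sign(I[:-1]) != np.sign(I[1:])
    steep = np.abs(I[:-1] - I[1:]) > I_thresh
    return np.where(opposite & steep, 0, 1)   # active low, like VZ above
```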
3 BEHAVIOR
We tested the behavior of the chip by placing a small lens above the silicon wafer
to focus an image onto the array of photoreceptors. The input light profile that we
used is shown in Figure 5a. Figure 5 b is an oscilloscope trace showing the smoothed
voltages (VI and V2 of Figure 3) corresponding to the filtered versions of the image.
The difference of these two smoothed voltage traces is shown in Figure 5c. Arrows
indicate the locations of two zero-crossings which the chip reports at the output.
The reported zero-crossings accurately localize the positions of the edges in the
image. The trace in Figure 5c crosses zero at other locations, but zero-crossings
with slope less than the adjustable threshold are masked by the circuitry shown in
Figure 4. This allows for noise and imperfections in the circuitry and can be used
to filter out weaker edges which are not relevant to the application.
Figure 6 shows the response when two fingers are held one meter from the lens
and swept across the field of view. The fingers appear as bright regions against a
darker background. The chip accurately localizes the four edges (two per finger) as
indicated by the pulses below each voltage trace. As the fingers move quickly back
and forth across the field of view, the image and the zero-crossings follow the object
with no perceived delay. The measured response time of the chip to the appearance
of a detectable discontinuity in light intensity varies from about 100 μsec in bright
indoor illumination to about 10msec in a dark room. The time constant is longer
for lower illumination due to the design of the logarithmic photoreceptor (Mead,
1989).
The chip has been proven to be a reliable and robust edge detector through its
use in two systems. It provides data for a system designed at the Hughes Aircraft
Artificial Intelligence Center which tracks edges and reports their velocities at over
300Hz. Also, we have built a hand-held battery powered device which displays the
locations of edges on a bank of 63 LEDs. This device accurately detects edges
in many different environments, ranging from a dimly lit room to bright outdoor
sunlight.
4 SIMULATIONS OF A 2-D VERSION
We have used a computer simulation of rectangular networks of ideal linear resistors
to test the extension of this technique in two dimensions. Results indicate that the
zero-crossings from the difference of two symmetrical exponential filters are qualitatively similar to those from the DOG. Figure 7 compares the zero-crossing from
a difference of Gaussians filter (left) to those from a difference of resistive networks
filter (right). For the DOG, a Gaussian of σ = 1.25 pixels is subtracted from a Gaussian of σ = 0.75 pixels. For the resistive networks, a filter of characteristic length 1 was subtracted from one with characteristic length 1/√2. Weaker zero-crossings are masked from both output images by thresholding on the slope to emphasize comparison of the stronger edges.
[Figure 5: Chip response to a light bar stimulus: (a) input light profile, (b) smoothed voltages V1 and V2, (c) their difference, with arrows marking the reported zero-crossings.]
[Figure 6: Chip response to two moving stimuli.]
[Figure 7: Zero-crossings from the difference of two Gaussians (left) and similar output from a difference of decaying exponentials (right).]
5 CONCLUSION
Our analog VLSI chip shows that finding the thresholded zero-crossings of the
difference of exponential filters is a robust technique for localizing intensity edges in
real-time. The robust behavior of the chip in systems to track edges and determine
velocity demonstrates the usefulness of implementing simple algorithms in analog
VLSI and the advantages of avoiding large, more general digital systems for these
purposes.
Acknowledgements
Many thanks to Carver Mead. Our laboratory is partially supported by grants
from the Office of Naval Research, the Rockwell International Science Center and
the Hughes Aircraft Artificial Intelligence Center. Wyeth Bair is supported by a
National Science Foundation Graduate Fellowship. Thanks also to Steve DeWeerth
and John Harris.
References
Marr, D. and Hildreth, E.C. (1980) Theory of edge detection. Proc. Roy. Soc.
Lond. B 207:187-217.
Mead, C.A. (1989) Analog VLSI and Neural Systems. Addison-Wesley: Reading,
MA.
3,457 | 4,130 | Implicit encoding of prior probabilities
in optimal neural populations
Deep Ganguli and Eero P. Simoncelli
Howard Hughes Medical Institute, and
Center for Neural Science
New York University
New York, NY 10003
{dganguli,eero}@cns.nyu.edu
Optimal coding provides a guiding principle for understanding the representation
of sensory variables in neural populations. Here we consider the influence of a
prior probability distribution over sensory variables on the optimal allocation of
neurons and spikes in a population. We model the spikes of each cell as samples
from an independent Poisson process with rate governed by an associated tuning
curve. For this response model, we approximate the Fisher information in terms
of the density and amplitude of the tuning curves, under the assumption that tuning width varies inversely with cell density. We consider a family of objective
functions based on the expected value, over the sensory prior, of a functional of
the Fisher information. This family includes lower bounds on mutual information
and perceptual discriminability as special cases. In all cases, we find a closed
form expression for the optimum, in which the density and gain of the cells in
the population are power law functions of the stimulus prior. This also implies
a power law relationship between the prior and perceptual discriminability. We
show preliminary evidence that the theory successfully predicts the relationship
between empirically measured stimulus priors, physiologically measured neural
response properties (cell density, tuning widths, and firing rates), and psychophysically measured discrimination thresholds.
1 Introduction
Many bottom up theories of neural encoding posit that sensory systems are optimized to represent sensory information, subject to limitations of noise and resources (e.g., number of neurons,
metabolic cost, wiring length). It is difficult to test this concept because optimization of any formulation that attempts to correctly incorporate all of the relevant ingredients is generally intractable. A
substantial literature has considered population models in which each neuron?s mean response to a
scalar variable is characterized by a tuning curve [e.g., 1?6]. For these simplified models, several
papers have examined the optimization of Fisher information, as a bound on mean squared error
[7?10]. In these results, the distribution of sensory variables is assumed to be uniform and the populations are assumed to be homogeneous with regard to tuning curve shape, spacing, and amplitude.
The distribution of sensory variables encountered in the environment is often non-uniform, and it is
thus of interest to understand how variations in probability affect the design of optimal populations.
It would seem natural that a neural system should devote more resources to regions of sensory space
that occur with higher probability, analogous to results in coding theory [11]. At the single neuron
level, several publications describe solutions in which monotonic neural response functions allocate
greater dynamic range to higher probability stimuli [12?15]. At the population level, non-uniform
allocations of neurons with identical tuning curves have been shown to be optimal for non-uniform
stimulus distributions [16, 17].
Here, we examine the influence of a sensory prior on the optimal allocation of neurons and spikes
in a population, and the implications of this optimal allocation for subsequent perception. Given
a prior distribution over a scalar stimulus parameter, and a resource budget of N neurons with an
average of R spikes/sec for the entire population, we seek the optimal shapes, positions, and amplitudes of tuning curves. We assume a population with independent Poisson spiking, and consider
a family of objective functions based on Fisher information. We then approximate the Fisher information in terms of two continuous resource variables, the density and gain of the tuning curves.
This approximation allows us to obtain a closed form solution for the optimal population. For all
objective functions, we find that the optimal tuning curve properties (cell density, tuning width, and
gain) are power-law functions of the stimulus prior, with exponents dependent on the specific choice
of objective function. Through the Fisher information, we also derive a bound on perceptual discriminability, again in the form a power-law of the stimulus prior. Thus, our framework provides
direct and experimentally testable links between sensory priors, properties of the neural representation, and perceptual discriminability. We provide preliminary evidence that these relationships are
supported by experimental data.
2 Encoding model
We assume a conventional model for a population of N neurons responding to a single scalar variable, s [1?6]. The number of spikes emitted (per unit time) by the nth neuron is a sample from
an independent Poisson process, with mean rate determined by its tuning function, hn (s). The
probability density of the population response can be written as

p(r | s) = ∏n=1..N hn(s)^rn e^(−hn(s)) / rn! .
We also assume the total expected spike rate, R, of the population is fixed, which places a constraint
on the tuning curves:

∫ p(s) Σn=1..N hn(s) ds = R,   (1)
where p(s) is the probability distribution of stimuli in the environment. We refer to this as a sensory
prior, in anticipation of its future use in Bayesian decoding of the population response.
3 Objective function
We now ask: what is the best way to represent values drawn from p(s) given the limited resources
of N neurons and R total spikes? To formulate a family of objective functions which depend on
both p(s), and the tuning curves, we first rely on Fisher information, If(s), which can be written as a function of the tuning curves [1, 18]:

If(s) = −∫ p(r | s) (∂²/∂s²) log p(r | s) dr = Σn=1..N h′n(s)² / hn(s).
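Numerically, this quantity is a one-liner given sampled tuning curves; the following sketch evaluates it for an arbitrary Poisson population:

```python
import numpy as np

def fisher_information(s, tuning):
    """If(s) = sum_n h_n'(s)^2 / h_n(s) for independent Poisson neurons.

    s      -- 1-D grid of stimulus values.
    tuning -- array of shape (N, len(s)); tuning[n] samples h_n(s)."""
    deriv = np.gradient(tuning, s, axis=1)
    return np.sum(deriv**2 / np.maximum(tuning, 1e-12), axis=0)
```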
The Fisher information can be used to express lower bounds on mutual information [16], the variance of an unbiased estimator [18], and perceptual discriminability [19]. Specifically, the mutual
information, I(r; s), is bounded by:

I(r; s) ≥ H(s) − (1/2) ∫ p(s) log( 2πe / If(s) ) ds,   (2)
where H(s) is the entropy, or amount of information inherent in p(s), which is independent of the
neural population. The Cramér-Rao inequality allows us to express the minimum expected squared stimulus discriminability achievable by any decoder¹:

⟨δ²⟩ ≥ α² ∫ p(s) / If(s) ds.   (3)

The constant α determines the performance level at threshold in a discrimination task.
We formulate a generalized objective function that includes the Fisher bounds on information and discriminability as special cases:

arg max over {hn(s)} of  ∫ p(s) f( Σn=1..N h′n(s)² / hn(s) ) ds,   s.t.   ∫ p(s) Σn=1..N hn(s) ds = R,   (4)
where f(·) is either the natural logarithm or a power function. When f(x) = log(x), optimizing Eq. (4) is equivalent to maximizing the lower bound on mutual information given in Eq. (2). We refer to this as the infomax objective function. Otherwise, we assume f(x) = x^α, for some exponent α. Optimizing Eq. (4) with α = −1 is equivalent to minimizing the squared discriminability bound expressed in Eq. (3). We refer to this as the discrimax objective function.
4
How to optimize?
The objective function expressed in Eq. (4) is difficult to optimize because it is non-convex. To
make the problem tractable, we first introduce a parametrization of the population in terms of cell
density and gain. The cell density controls both the spacing and width of the tuning curves, and the
gain controls their maximum average firing rates. Second, we show that Fisher information can be
closely approximated as a continuous function of density and gain. Finally, re-writing the objective
function and constraints in these terms allows us to obtain closed-form solutions for the optimal
tuning curves.
4.1 Density and gain for a homogeneous population
If p(s) is uniform, then by symmetry, the Fisher information for an optimal neural population should
also be uniform. We assume a convolutional population of tuning curves, evenly spaced on the unit
lattice, such that they approximately "tile" the space:

Σn=1..N h(s − n) ≈ 1.
We also assume that this population has an approximately constant Fisher information:

If(s) = Σn=1..N h′(s − n)² / h(s − n)
      = Σn=1..N φ(s − n) ≈ Iconv.   (5)
That is, we assume that the Fisher information curves for the individual neurons, φ(s − n), also
tile the stimulus space. The value of the constant, Iconv , is dependent on the details of the tuning
curve shape, h(s), which we leave unspecified. As an example, Fig. 1(a-b) shows that the Fisher
information for a convolutional population of Gaussian tuning curves, with appropriate width, is
approximately constant.
Now we introduce two scalar values, a gain (g), and a density (d), that affect the convolutional
population as follows:

hn(s) = g h( d(s − n/d) ).   (6)
¹ The conventional Cramér-Rao bound expresses the minimum mean squared error of any estimator, and in general requires a correction for the estimator bias [18]. Here, we use it to bound the squared discriminability of the estimator, as expressed in the stimulus space, which is independent of bias [19].
[Fig. 1. Construction of a heterogeneous population of neurons. (a) Homogeneous population with Gaussian tuning curves on the unit lattice. The tuning width of σ = 0.55 is chosen so that the curves approximately tile the stimulus space. (b) The Fisher information of the convolutional population (green) is approximately constant. (c) Inset shows d(s), the tuning curve density. The cumulative integral of this density, D(s), alters the positions and widths of the tuning curves in the convolutional population. (d) The warped population, with tuning curve peaks (aligned with tick marks, at locations sn = D^(−1)(n)), is scaled by the gain function, g(s) (blue). A single tuning curve is highlighted (red) to illustrate the effect of the warping and scaling operations. (e) The Fisher information of the inhomogeneous population is approximately proportional to d²(s)g(s).]
The gain modulates the maximum average firing rate of each neuron in the population. The density
controls both the spacing and width of the tuning curves: as the density increases, the tuning curves
become narrower, and are spaced closer together so as to maintain their tiling of stimulus space. The
effect of these two parameters on Fisher information is:

If(s) = d²g Σn=1..N(d) φ(ds − n)
      ≈ d²g Iconv.
The second line follows from the assumption of Eq. (5), that the Fisher information of the convolutional population is approximately constant with respect to s.
The total resources, N and R, naturally constrain d and g, respectively. If the original (unit-spacing)
convolutional population is supported on the interval (0, Q) of the stimulus space, then the number
of neurons in the modulated population must be N (d) = Qd to cover the same interval. Under
the assumption that the tuning curves tile the stimulus space, Eq. (1) implies that R = g for the
modulated population.
4.2 Density and gain for a heterogeneous population

Intuitively, if p(s) is non-uniform, the optimal Fisher information should also be non-uniform. This can be achieved through inhomogeneities in either the tuning curve density or gain. We thus generalize density and gain to be continuous functions of the stimulus, d(s) and g(s), that warp and scale the convolutional population:

hn(s) = g(sn) h(D(s) − n).   (7)
Table 1. Optimal heterogeneous population properties, for objective functions specified by Eq. (9).

                                      Infomax           Discrimax             General
Optimized function                    f(x) = log x      f(x) = −x^(−1)        f(x) = −x^α, α < 1/3
Density (tuning width)^(−1), d(s)     N p(s)            N p^(1/2)(s)          N p^((α−1)/(3α−1))(s)
Gain, g(s)                            R                 R p^(−1/2)(s)         R p^(2α/(1−3α))(s)
Fisher information, If(s)             ∝ R N² p²(s)      ∝ R N² p^(1/2)(s)     ∝ R N² p^(2/(1−3α))(s)
Discriminability bound, δmin(s)       ∝ p^(−1)(s)       ∝ p^(−1/4)(s)         ∝ p^(1/(3α−1))(s)
Here, D(s) = ∫ from −∞ to s of d(t) dt, the cumulative integral of d(s), warps the shape of the prototype tuning curve. The value sn = D^(−1)(n) represents the preferred stimulus value of the (warped) nth tuning curve (Fig. 1(b-d)). Note that the warped population retains the tiling properties of the original convolutional population. As in the uniform case, the density controls both the spacing and width of the tuning curves. This can be seen by rewriting Eq. (7) as a first-order Taylor expansion of D(s) around sn:

hn(s) ≈ g(sn) h( d(sn)(s − sn) ),

which is a generalization of Eq. (6).
We can now write the Fisher information of the heterogeneous population of neurons in Eq. (7) as

If(s) = Σn=1..N d²(s) g(sn) φ(D(s) − n)
      ≈ d²(s) g(s) Iconv.   (8)

In addition to assuming that the Fisher information is approximately constant (Eq. (5)), we have also assumed that g(s) is smooth relative to the width of φ(D(s) − n) for all n, so that we can approximate g(sn) as g(s) and remove it from the sum. The end result is an approximation of Fisher information in terms of the continuous parameterization of cell density and gain. As earlier, the constant Iconv is determined by the precise shape of the tuning curves.
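The approximation of Eq. (8) can be checked directly by constructing the warped population of Eq. (7). The sketch below uses a Gaussian prototype tuning curve with σ = 0.55 (as in Fig. 1) and the power-law solutions of Table 1; the discretization details are our own choices, and the prior is assumed strictly positive on the grid.

```python
import numpy as np

def warped_population(s, prior, N=30, R=50.0, alpha=None):
    """Construct h_n(s) = g(s_n) h(D(s) - n) from a prior (sketch).

    alpha=None gives the infomax solution d = N p(s), g = R; otherwise the
    general power-law solution from Table 1 is used (requires alpha < 1/3)."""
    p = prior / np.trapz(prior, s)
    if alpha is None:
        d = N * p
        g = R * np.ones_like(s)
    else:
        d = p ** ((alpha - 1.0) / (3.0 * alpha - 1.0))
        d *= N / np.trapz(d, s)              # enforce integral d(s) ds = N
        g = p ** (2.0 * alpha / (1.0 - 3.0 * alpha))
        g *= R / np.trapz(p * g, s)          # enforce integral p(s)g(s) ds = R
    # D(s): cumulative integral of the density (trapezoid rule).
    D = np.concatenate(([0.0], np.cumsum(0.5 * (d[1:] + d[:-1]) * np.diff(s))))
    h = lambda u: np.exp(-u**2 / (2.0 * 0.55**2))   # prototype tuning curve
    sn = np.interp(np.arange(1, N + 1), D, s)       # s_n = D^{-1}(n)
    gn = np.interp(sn, s, g)
    return np.array([gn[n] * h(D - (n + 1)) for n in range(N)])
```

Feeding the result to the fisher_information sketch above and comparing against d²(s)g(s) (up to the constant Iconv) makes the quality of the approximation visible directly.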
As in the homogeneous case, the global resource values N and R will place constraints on d(s)
and g(s), respectively. In particular, we require that D(·) map the entire input space onto the range [1, N], and thus D(∞) = N, or equivalently, ∫ d(s) ds = N. To attain the proper rate, we use the fact that the warped tuning curves sum to unity (before multiplication by the gain function) and use Eq. (1) to obtain the constraint ∫ p(s)g(s) ds = R.
4.3 Objective function and solution for a heterogeneous population

Approximating Fisher information as proportional to squared density and gain allows us to re-write the objective function and resource constraints of Eq. (4) as

arg max over d(s), g(s) of  ∫ p(s) f( d²(s) g(s) ) ds,   s.t.   ∫ d(s) ds = N,  and  ∫ p(s)g(s) ds = R.   (9)

A closed-form optimum of this objective function is easily determined by taking the gradient of the Lagrangian, setting to zero, and solving the resulting system of equations. Solutions are provided in Table 1 for the infomax, discrimax, and general power cases.
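The closed form can also be sanity-checked numerically: perturbing the infomax solution within the two resource constraints should never increase the objective of Eq. (9). A sketch of such a check (the perturbation scheme is our own choice; the prior is assumed strictly positive on the grid):

```python
import numpy as np

def objective(d, g, p, s, f=np.log):
    """The objective of Eq. (9), evaluated on a grid by the trapezoid rule."""
    return np.trapz(p * f(d**2 * g), s)

def check_infomax_optimum(p, s, N=30.0, R=50.0, eps=0.05, trials=100):
    """Perturb d = N p(s), g = R within the constraints of Eq. (9) and
    confirm the infomax objective never increases (sketch)."""
    p = p / np.trapz(p, s)
    d, g = N * p, R * np.ones_like(s)
    base = objective(d, g, p, s)
    rng = np.random.default_rng(0)
    for _ in range(trials):
        dd = d * (1.0 + eps * rng.standard_normal(len(s)))
        dd *= N / np.trapz(dd, s)             # keep integral d(s) ds = N
        gg = g * (1.0 + eps * rng.standard_normal(len(s)))
        gg *= R / np.trapz(p * gg, s)         # keep integral p(s)g(s) ds = R
        assert objective(dd, gg, p, s) <= base + 1e-9
```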
In all cases, the solution specifies a power-law relationship between the prior, and the density and
gain of the tuning curves. In general, all solutions allocate more neurons, with correspondingly
narrower tuning curves, to higher-probability stimuli. In particular, the infomax solution allocates
an approximately equal amount of probability mass to each neuron. The shape of the optimal gain
function depends on the objective function: for ? < 0, neurons with lower firing rates are used
to represent stimuli with higher probabilities, and for ? > 0, neurons with higher firing rates are
used for stimuli with higher probabilities. Note also that the global resource values, N and R,
enter only as scale factors on the overall solution, allowing us to easily test the validity of the
5
(a)
Environment
(b)
Physiology
(c)
Threshold (deg)
# Cells
Probability
15
10
5
10
0
0
45
90
135
180
Perception
15
5
0
0
45
90
135
180
0
45
90
135
180
Orientation
(d) Infomax predictions
Probability
(e) Discrimax predictions
0
45
90
135
180
0
45
90
135
180
Orientation
Fig. 2. (a) Distribution of orientations averaged across three natural image databases [20?22]. (b)
Density, or total number of Macaque V1 cells tuned to each preferred orientation [23]. (c) Orientation
discrimination thresholds averaged across four human subjects [24]. (d & e) Infomax and discrimax
predictions of orientation distribution. Blue: prediction from cell density. Red: prediction from discrimination thresholds. Predictions were made by exponentiating the raw data with the appropriate
exponent from Table 1, then normalizing to integrate to one.
predicted relationships on experimental data. In addition to power-law relationships between tuning
properties and sensory priors, our formulation offers a direct relationship between the sensory prior
and perceptual discriminability. This can be obtained by substituting the optimal solutions for d(s)
and g(s) into Eq. (8),pand using the resulting Fisher information to bound the discriminability,
?(s) ? ?min (s) ? ?/ If (s) [19]. The resulting expressions are provided in Table 1.
5
Experimental evidence
Our framework predicts a quantitative link between the sensory prior, physiological parameters (the
density, tuning widths, and gain of cells), and psychophysically measured discrimination thresholds.
We obtained subsets of these quantities for two visual stimulus variables, orientation and spatial
frequency, both of believed to be encoded by cells in primary visual cortex (area V1). For each
variable, we use the infomax and discrimax solutions to convert the physiological and perceptual
measurements, using the appropriate exponents from Table 1, into predictions of the stimulus prior
p?(s). We then compare these predictions to the empirically measured prior p(s).
5.1
Orientation
We estimated the prior distribution of orientations in the environment by averaging orientation statistics across three natural image databases. Two databases consist entirely of natural scenes [20, 21],
and the third contains natural and manmade scenes [22]. Orientation statistics depend on scale, so
we measured statistics at a scale matching the psychophysical experiment from which we obtained
perceptual data. The average distribution of orientations exhibits higher probability at the cardinal
orientations (vertical and horizontal) than at the oblique orientations (Fig. 2(a)). Measurements of
cell density for a population of 79 orientation-tuned V1 cells in Macaque [23] show more cells tuned
to the cardinal orientations than the oblique orientations (Fig. 2(b)). Finally, perceptual discrimination thresholds, averaged across four human subjects [24] show a similar bias (Fig. 2(c)), with
humans better able to discriminate orientations near the cardinal directions.
All of the orientation data exhibit similar biases, but our theory makes precise and testable predictions about these relationships. If a neural population is designed to maximize information, then the
cell density and inverse discrimination thresholds should match the stimulus prior, as expressed in
infomax column of Table 1. We normalize these predictions to integrate to one (since the theory
6
Environment
(b)
Physiology
Log probability
100
24
tuning width
(cpd)
density
50 (# Cells)
12
0
0
5
10
15
Threshold (cpd)
(a)
0
0
1
2
3
4
5
6
(c)
Perception
0
5
3
2
1
0
7
10
15
Spatial Frequency (cpd)
(e) Discrimax predictions
Log probability
(d) Infomax predictions
0
5
10
15
0
5
10
15
Spatial Frequency (cpd)
Fig. 3. (a) Distribution of spatial frequencies computed across two natural image databases [20, 21].
(b) Cell density as a function of preferred spatial frequency for a population of 317 V1 cells [25,
28] Dark blue: average number of cells tuned to each spatial frequency. Light blue: average tuning
width. (c) Average spatial frequency discrimination thresholds. Dark red: thresholds obtained at
10% contrast averaged across 3 human subjects [26]. Light red: thresholds obtained at 25% contrast
averaged across 7-13 human subjects [27]. (d & e) Infomax and discrimax predictions of spatial
frequency distribution. Blues: predictions from cell density and tuning widths. Reds: predictions from
discrimination thresholds.
provides only the shapes of the functions, up to unknown values of the resource variables N and
R), and plot them against the measured prior (Fig. 2(d)). We see that the predictions arising from
cell density and discrimination thresholds are consistent with one another, and both are consistent
with the stimulus prior. This is especially remarkable given that the measurements come from very
different domains (in the case of the perceptual and physiological data, different species). For the
discrimax objective function, the exponents in the power-law relationships (expressed in Table 1)
are too small, resulting in poor qualitative agreement between the stimulus prior and predictions
from the physiology and perception (Fig. 2(e)). For example, predicting the prior from perceptual
data, under the discrimax objective function, requires exponentiating discrimination thresholds to
the fourth power, resulting in an over exaggeration of the cardinal bias.
5.2
Spatial frequency
We obtained a prior distribution over spatial frequencies averaged across two natural image
databases [20, 21]. For each image, we computed the magnitude spectrum, and averaged over orientation. We averaged these across images, and fit the result with a power law of exponent ?1.3
(Fig. 3(a)). We also obtained spatial frequency tuning properties for a population of 317 V1 cells
[25]. On average, we see there are more cells, with correspondingly narrower tuning widths, tuned
to low spatial frequencies (Fig. 3(b)). These data support the model assumption that tuning width
is inversely proportional to cell density. We also obtained average discrimination thresholds for sinusoidal gratings of different spatial frequencies from two studies (Fig. 3(c)). The gratings were
shown at 10% contrast to 3 human subjects for one study [26], and 25% contrast for 7 ? 13 human
subjects for the other [27]. The thresholds show that, on average, humans are better at discriminating
low spatial frequencies.
We again test the infomax and discrimax solutions by comparing predicted distributions obtained
from the physiological and perceptual data, to the measured prior. We normalize each prediction
to integrate to the corresponding area under the prior. The infomax case shows striking agreement
between the measured stimulus prior, and predictions based on the physiological and perceptual
measurements (Fig. 3(d)). However, as in the orientation case, discrimax predictions are poor (Fig.
3(e)), suggesting that information maximization provides a better optimality principle for explaining
the neural and perceptual encoding of spatial frequency than discrimination maximization.
7
6
Discussion
We have examined the influence sensory priors on the optimal allocation of neural resources, as
well as the influence of these optimized resources on subsequent perception. For a family of objective functions, we obtain closed-form solutions specifying power law relationships between the
probability distribution of a sensory variable encountered in the environment, the tuning properties
of a population that encodes that variable, and the minimum perceptual discrimination thresholds
achievable for that variable. We?ve shown preliminary supportive evidence for these relationships
for two different perceptual attributes.
Our analysis requires several approximations and assumptions in order to arrive at an analytical
solution. We first rely on lower bounds on mutual information and discriminability based on Fisher
information. Fisher information is known to provide a poor bound on mutual information when
there are a small number of neurons, a short decoding time, or non-smooth tuning curves [16, 29].
It also provides a poor bound on supra-threshold discriminability [30, 31]. But note that we do not
require the bounds on either information or discriminability to be tight, but rather that their optima
be close to that of their corresponding true objective functions. We also made several assumptions in
deriving our results: (1) the tuning curves, h(D(s)? n), evenly tile the stimulus space; (2) the single
neuron Fisher informations, ?(D(s) ? n), evenly tile the stimulus space; and (3) the gain function,
g(s), varies slowly and smoothly over the width of ?(D(s) ? n). These assumptions allow us to
approximate Fisher information in terms of cell density and gain (Fig. 1(e)), to express the resource
constraints in simple form, and to obtain a closed-form solution to the optimization problem.
Our framework offers an important generalization of the population coding literature, allowing for
non-uniformity of sensory priors, and corresponding heterogeneity in tuning and gain properties.
Nevertheless, it suffers from many of the same simplifications found in previous literature. First,
neural spike trains are not Poisson, and they are (at least in some cases) correlated [32]. Second,
tuning curve encoding models only specify neural responses to single stimulus values. The model
should be generalized to handle arbitrary combinations of stimuli. And third, the response model
should be generalized to handle multi-dimensional sensory inputs. Each of these limitations offers
an important opportunity for future work.
Finally, our encoding model has direct implications for Bayesian decoding, a problem that has received much attention in recent literature [e.g., 5, 6, 33?35]. A Bayesian decoder must have knowledge of prior probabilities, but it is unclear how such knowledge is obtained or represented in the
brain [34]. Previous studies assume that prior probabilities are either uniform [6], represented in
the spiking activity of a separate population of neurons [5], or represented (in sample form) in the
spontaneous activity [35]. Our encoding formulation provides a mechanism whereby the prior is implicitly encoded in the density and gains of tuning curves, which presumably arise from the strength
of synaptic connections. We are currently exploring the requirements for a decoder that can correctly
utilize this form of embedded prior information to obtain Bayesian estimates of stimulus variables.
References
[1] HS Seung and H Sompolinsky. Simple models for reading neuronal population codes. Proc. Natl. Acad.
Sci. U.S.A., 90:10749?10753, Nov 1993.
[2] RS Zemel, P Dayan, and A Pouget. Probabilistic interpretation of population codes. Neural Comput,
10(2):403?430, Feb 1998.
[3] A Pouget, P Dayan, and RS Zemel. Inference and computation with population codes. Annu Rev Neurosci,
26:381?410, 2003.
[4] TD Sanger. Neural population codes. Curr Opin Neurobiol, 13(2):238?249, Apr 2003.
[5] WJ Ma, JM Beck, PE Latham, and A Pouget. Bayesian inference with probabilistic population codes.
Nat Neurosci, 9(11):1432?1438, Nov 2006.
[6] M Jazayeri and JA Movshon. Optimal representation of sensory information by neural populations. Nat.
Neurosci., 9:690?696, May 2006.
[7] K Zhang and TJ Sejnowski. Neuronal tuning: To sharpen or broaden? Neural Comput, 11(1):75?84, Jan
1999.
[8] A Pouget, S Deneve, JC Ducom, and PE Latham. Narrow versus wide tuning curves: What?s best for a
population code? Neural Comput, 11(1):85?90, Jan 1999.
8
[9] WM Brown and A B?acker.
18(7):1511?1526, Jul 2006.
Optimal neuronal tuning for finite stimulus spaces.
Neural Comput,
[10] MA Montemurro and S Panzeri. Optimal tuning widths in population coding of periodic variables. Neural
Comput, 18(7):1555?1576, Jul 2006.
[11] A Gersho and RM Gray. Vector quantization and signal compression. Kluwer Academic Publishers,
Norwell, MA, 1991.
[12] S Laughlin. A simple coding procedure enhances a neuron?s information capacity. Z. Naturforschung,
36c:910?912, 1981.
[13] JP Nadal and N Parga. Nonlinear neurons in the low-noise limit: a factorial code maximizes information
transfer. Network: Computation in Neural Systems, 5:565?581(17), 1994.
[14] T von der Twer and DI MacLeod. Optimal nonlinear codes for the perception of natural colours. Network,
12(3):395?407, Aug 2001.
[15] MD McDonnell and NG Stocks. Maximally informative stimuli and tuning curves for sigmoidal ratecoding neurons and populations. Phys Rev Lett, 101:58103?58107, 2008.
[16] N Brunel and JP Nadal. Mutual information, Fisher information, and population coding. Neural Comput,
10(7):1731?1757, Oct 1998.
[17] NS Harper and D McAlpine. Optimal neural population coding of an auditory spatial cue. Nature,
430(7000):682?686, Aug 2004.
[18] D Cox and D Hinkley. Theoretical statistics. London: Chapman and Hall., 1974.
[19] P Seri?es, AA Stocker, and EP Simoncelli. Is the homunculus ?aware? of sensory adaptation? Neural
Comput, 21(12):3271?3304, Dec 2009.
[20] JH van Hateren and A van der Schaaf. Independent component filters of natural images compared with
simple cells in primary visual cortex. Proc Biol Sci, 265(1394):359?366, Mar 1998.
[21] E Doi, T Inui, TW Lee, T Wachtler, and TJ Sejnowski. Spatiochromatic receptive field properties derived from information-theoretic analyses of cone mosaic responses to natural scenes. Neural Comput,
15(2):397?417, Feb 2003.
[22] A Olmos and FAA Kingdom. McGill calibrated image database, http://tabby.vision.mcgill.ca, 2004.
[23] RJ Mansfield. Neural basis of orientation perception in primate vision. Science, 186(4169):1133?1135,
Dec 1974.
[24] AR Girshick, MS Landy, and EP Simoncelli. Bayesian line orientation perception: Human prior expectations match natural image statistics. In Frontiers in Systems Neuroscience (CoSyNe)., 2010.
[25] JR Cavanaugh, W Bair, and JA Movshon. Selectivity and spatial distribution of signals from the receptive
field surround in macaque v1 neurons. J Neurophysiol, 88(5):2547?2556, Nov 2002.
[26] T Caelli, H Brettel, I Rentschler, and R Hilz. Discrimination thresholds in the two-dimensional spatial
frequency domain. Vision Res, 23(2):129?133, 1983.
[27] D Regan, S Bartol, TJ Murray, and KI Beverley. Spatial frequency discrimination in normal vision and in
patients with multiple sclerosis. Brain, 105 (Pt 4):735?754, Dec 1982.
[28] JR Cavanaugh, W Bair, and JA Movshon. Nature and interaction of signals from the receptive field center
and surround in macaque v1 neurons. J Neurophysiol, 88(5):2530?2546, Nov 2002.
[29] M Bethge, D Rotermund, and K Pawelzik. Optimal short-term population coding: when fisher information fails. Neural Comput, 14(10):2317?2351, Oct 2002.
[30] M Shamir and H Sompolinsky. Implications of neuronal diversity on population coding. Neural Comput,
18(8):1951?1986, Aug 2006.
[31] P Berens, S Gerwinn, A Ecker, and M Bethge. Neurometric function analysis of population codes. In
Advances in Neural Information Processing Systems 22, pages 90?98, 2009.
[32] E Zohary, MN Shadlen, and WT Newsome. Correlated neuronal discharge rate and its implications for
psychophysical performance. Nature, 370(6485):140?143, Jul 1994.
[33] DC Knill and A Pouget. The bayesian brain: the role of uncertainty in neural coding and computation.
Trends Neurosci, 27(12):712?719, Dec 2004.
[34] EP Simoncelli. Optimal estimation in sensory systems. In M Gazzaniga, editor, The Cognitive Neurosciences, IV, chapter 36, pages 525?535. MIT Press, Oct 2009.
[35] J Fiser, P Berkes, G Orb?an, and M Lengyel. Statistically optimal perception and learning: from behavior
to neural representations. Trends Cogn Sci, 14(3):119?130, Mar 2010.
9
| 4130 |@word h:1 cox:1 compression:1 achievable:2 d2:4 seek:1 r:3 contains:1 tuned:5 comparing:1 written:2 must:2 subsequent:2 informative:1 shape:7 opin:1 remove:1 designed:1 plot:1 discrimination:16 cue:1 parameterization:1 cavanaugh:2 parametrization:1 oblique:2 short:2 provides:6 location:1 sigmoidal:1 zhang:1 direct:3 become:1 qualitative:1 introduce:2 twer:1 expected:3 montemurro:1 behavior:1 examine:1 multi:1 brain:3 td:1 actual:2 jm:1 pawelzik:1 zohary:1 provided:2 bounded:1 maximizes:1 mass:1 what:2 unspecified:1 neurobiol:1 nadal:2 quantitative:1 scaled:1 rm:1 control:4 unit:4 medical:1 before:1 limit:1 acad:1 encoding:7 firing:7 approximately:9 discriminability:15 examined:2 specifying:1 limited:1 range:2 statistically:1 averaged:8 hughes:1 caelli:1 cogn:1 procedure:1 jan:2 area:2 attain:1 physiology:3 matching:1 anticipation:1 onto:1 close:1 influence:4 writing:1 optimize:2 conventional:2 equivalent:2 map:1 center:2 maximizing:1 lagrangian:1 ecker:1 attention:1 convex:1 formulate:2 pouget:5 estimator:4 deriving:1 population:63 handle:2 variation:1 analogous:1 discharge:1 mcgill:2 construction:1 spontaneous:1 pt:1 shamir:1 homogeneous:4 mosaic:1 agreement:2 trend:2 approximated:1 predicts:2 database:6 bottom:1 ep:3 role:1 region:1 wj:1 sompolinsky:2 substantial:1 environment:6 seung:1 dynamic:1 depend:2 solving:1 tight:1 uniformity:1 basis:1 neurophysiol:2 easily:2 stock:1 represented:3 chapter:1 train:1 describe:1 london:1 sejnowski:2 seri:1 doi:1 zemel:2 encoded:2 otherwise:1 statistic:5 highlighted:1 inhomogeneity:1 analytical:1 interaction:1 adaptation:1 relevant:1 aligned:1 normalize:2 optimum:3 supra:1 requirement:1 leave:1 derive:1 illustrate:1 measured:9 received:1 aug:3 grating:2 eq:15 p2:1 predicted:2 implies:2 come:1 qd:1 orb:1 direction:1 posit:1 inhomogeneous:1 closely:1 manmade:1 attribute:1 filter:1 human:9 require:1 ja:3 generalization:2 preliminary:3 exploring:1 frontier:1 correction:1 tabby:1 around:1 considered:1 cramer:2 hall:1 normal:1 presumably:1 panzeri:1 substituting:1 estimation:1 proc:2 currently:1 wachtler:1 successfully:1 mit:1 gaussian:2 rather:1 publication:1 derived:1 contrast:4 inference:2 ganguli:1 dependent:2 dayan:2 entire:2 arg:2 overall:1 orientation:23 exponent:6 spatial:19 special:2 mutual:7 schaaf:1 equal:1 aware:1 field:3 ng:1 chapman:1 identical:1 represents:1 future:2 np:1 stimulus:33 cpd:4 inherent:1 cardinal:4 ve:1 individual:1 beck:1 cns:1 maintain:1 attempt:1 curr:1 interest:1 light:2 natl:1 tj:3 stocker:1 implication:4 norwell:1 integral:2 closer:1 allocates:1 iv:1 taylor:1 logarithm:1 re:3 girshick:1 theoretical:1 jazayeri:1 column:1 earlier:1 rao:2 cover:1 ar:1 retains:1 newsome:1 lattice:2 maximization:2 cost:1 subset:1 uniform:10 too:1 varies:2 periodic:1 psychophysically:2 calibrated:1 density:36 peak:1 discriminating:1 probabilistic:2 lee:1 decoding:3 infomax:12 bethge:2 together:1 squared:6 again:2 von:1 hn:10 slowly:1 tile:6 cosyne:1 cognitive:1 warped:4 suggesting:1 sinusoidal:1 diversity:1 coding:10 sec:1 includes:2 jc:1 depends:1 closed:6 red:5 wm:1 jul:3 pand:1 convolutional:9 variance:1 spaced:2 generalize:1 bayesian:7 raw:1 parga:1 lengyel:1 phys:1 suffers:1 synaptic:1 against:1 frequency:17 naturally:1 associated:1 di:1 gain:24 auditory:1 ask:1 knowledge:2 amplitude:3 higher:7 dt:1 response:9 specify:1 maximally:1 formulation:3 mar:2 implicit:1 fiser:1 d:11 horizontal:1 nonlinear:2 gray:1 effect:2 validity:1 concept:1 unbiased:1 true:1 brown:1 wiring:1 width:19 whereby:1 m:1 generalized:3 faa:1 theoretic:1 latham:2 
image:9 mcalpine:1 functional:1 spiking:2 empirically:2 jp:2 interpretation:1 kluwer:1 refer:3 measurement:4 naturforschung:1 surround:2 enter:1 tuning:57 sharpen:1 cortex:2 feb:2 berkes:1 recent:1 optimizing:2 beverley:1 inui:1 selectivity:1 inequality:1 supportive:1 gerwinn:1 der:2 seen:1 minimum:3 greater:1 maximize:1 signal:3 multiple:1 simoncelli:4 rj:1 smooth:2 match:2 characterized:1 academic:1 offer:3 believed:1 prediction:20 mansfield:1 heterogeneous:5 vision:4 expectation:1 poisson:4 patient:1 represent:3 achieved:1 cell:28 dec:4 addition:2 spacing:5 interval:2 publisher:1 subject:7 seem:1 emitted:1 near:1 affect:2 fit:1 prototype:1 expression:2 bair:2 allocate:2 colour:1 movshon:3 york:2 olmos:1 deep:1 generally:1 factorial:1 amount:2 dark:2 http:1 specifies:1 homunculus:1 alters:1 estimated:1 arising:1 correctly:2 per:1 neuroscience:2 blue:5 write:2 express:4 four:2 threshold:20 nevertheless:1 drawn:1 rewriting:1 utilize:1 v1:7 deneve:1 sum:2 convert:1 cone:1 inverse:1 fourth:1 uncertainty:1 striking:1 place:2 family:5 arrive:1 scaling:1 rotermund:1 entirely:1 bound:15 ki:1 simplification:1 encountered:2 activity:2 strength:1 occur:1 constraint:6 constrain:1 scene:3 encodes:1 min:2 optimality:1 hinkley:1 combination:1 poor:4 mcdonnell:1 spatiochromatic:1 jr:2 across:9 sclerosis:1 unity:1 tw:1 rev:2 primate:1 intuitively:1 rentschler:1 resource:13 equation:1 mechanism:1 tractable:1 gersho:1 end:1 tiling:2 operation:1 appropriate:3 rp:2 original:2 broaden:1 responding:1 opportunity:1 landy:1 sanger:1 macleod:1 testable:2 especially:1 murray:1 approximating:1 warping:1 objective:20 psychophysical:2 quantity:1 spike:8 receptive:3 primary:2 md:1 devote:1 exhibit:2 gradient:1 unclear:1 enhances:1 link:2 separate:1 sci:3 capacity:1 decoder:2 evenly:3 neurometric:1 assuming:1 length:1 code:9 relationship:11 minimizing:1 equivalently:1 difficult:2 kingdom:1 design:1 proper:1 exaggeration:1 unknown:1 allowing:2 vertical:1 neuron:27 howard:1 finite:1 bartol:1 heterogeneity:1 precise:2 dc:1 rn:5 arbitrary:1 specified:1 optimized:3 connection:1 narrow:1 macaque:4 able:1 gazzaniga:1 perception:9 reading:1 max:2 green:1 power:12 natural:12 rely:2 discrimax:12 predicting:1 ducom:1 nth:2 mn:1 inversely:2 sn:9 prior:35 understanding:1 literature:4 multiplication:1 relative:1 law:9 embedded:1 regan:1 limitation:2 allocation:5 proportional:3 versus:1 ingredient:1 remarkable:1 integrate:3 consistent:2 shadlen:1 principle:2 metabolic:1 editor:1 supported:2 bias:5 jh:1 understand:1 tick:1 institute:1 warp:2 explaining:1 taking:1 correspondingly:2 allow:1 wide:1 laughlin:1 van:2 regard:1 curve:40 lett:1 cumulative:2 sensory:21 made:2 exponentiating:2 simplified:1 approximate:6 nov:4 preferred:3 implicitly:1 deg:1 global:2 assumed:3 eero:2 spectrum:1 physiologically:1 continuous:4 table:7 nature:3 transfer:1 ca:1 symmetry:1 expansion:1 berens:1 domain:2 apr:1 neurosci:4 noise:2 arise:1 knill:1 neuronal:5 fig:16 ny:1 n:1 fails:1 position:2 guiding:1 comput:10 governed:1 perceptual:16 pe:2 third:2 annu:1 specific:1 inset:1 nyu:1 physiological:5 evidence:4 normalizing:1 intractable:1 consist:1 quantization:1 modulates:1 magnitude:1 nat:2 budget:1 entropy:1 smoothly:1 visual:3 expressed:5 scalar:4 monotonic:1 brunel:1 aa:1 determines:1 ma:3 oct:3 narrower:3 fisher:33 experimentally:1 determined:3 specifically:1 averaging:1 wt:1 total:4 specie:1 discriminate:1 experimental:3 e:1 mark:1 support:1 modulated:2 harper:1 hateren:1 incorporate:1 biol:1 correlated:2 |
3,458 | 4,131 | Efficient algorithms for learning kernels from
multiple similarity matrices with general convex loss
functions
Vikram Tankasali
Dept. of Computer Science & Automation,
Indian Institute of Science, Bangalore.
[email protected]
Achintya Kundu
Dept. of Computer Science & Automation,
Indian Institute of Science, Bangalore.
[email protected]
Chiranjib Bhattacharyya
Dept. of Computer Science & Automation,
Indian Institute of Science, Bangalore.
[email protected]
Aharon Ben-Tal
Faculty of Industrial Engg. & Management,
Technion - Israel Institute of Technology, Haifa.
[email protected]
Visiting Professor, CWI, Amsterdam
Abstract
In this paper we consider the problem of learning an n ? n kernel matrix from
m(? 1) similarity matrices under general convex loss. Past research have extensively studied the m = 1 case and have derived several algorithms which require
sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not
apply if one uses arbitrary losses and often can not handle m > 1 case. We
present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for m > 1
case. One of the major contributions of the paper is to extend the well known Mirror Descent(MD) framework to handle Cartesian product of psd matrices. This
novel extension leads to an algorithm, called EMKL, which solves the problem in
2
n
O( m ?log
) iterations; in each iteration one solves an MKL involving m kernels
2
and m eigen-decomposition of n ? n matrices. By suitably defining a restriction
on the objective function, a faster version of EMKL is proposed, called REKL,
which avoids the eigen-decomposition. An alternative to both EMKL and REKL
is also suggested which requires only an SVM solver. Experimental results on real
world protein data set involving several similarity matrices illustrate the efficacy
of the proposed algorithms.
1 Introduction
Learning procedures based on positive semidefinite (psd) kernel functions, like Support vector machines (SVMs), have emerged as powerful tools for several learning tasks with wide applicability
[13]. In many applications it is relatively straightforward to define measures of similarity between
any pair of examples but extremely difficult to design a kernel function for accurate classification.
For instance, similarity score between two protein sequences given by various measures like BLAST
[1], Smith-Waterman [14], etc are not psd, whence cannot be substituted as kernel. In this paper, we
consider the problem of learning an optimal kernel matrix, from multiple similarity matrices, that
yields accurate classification.
Let the set of n ? n real symmetric matrices be denoted as Sn and the set of psd matrices as Sn+ .
Consider a binary classification problem with n training examples. Let y ? {+1 , ?1}n be the
1
vector of class labels and K ? Sn+ be a kernel matrix. The SVM formulation [13] computes a
performance measure ?(K) by solving
h
i
(1)
?(K) = max ?? 1 ? 0.5 ?? Y KY ? ,
??A
where A = {? ? Rn | ?? y = 0, 0 ? ? ? C1}, Y = diag(y), 1 = [ 1 . . . 1 ]? ? Rn and C user
defined positive constant.
1.1 Background and Related work
To the best of our knowledge the problem of handling multiple similarity matrices and arbitrary
convex losses has not been studied before. Existing literature has focussed on only one similarity
matrix and specific choices of loss function. In this section we briefly review the related literature.
The problem was first studied in [8] for a single similarity matrix. They introduced the following
optimization problem
minn ?(K) + ? kK ? Sk2F ,
(2)
K?S+
where S ? Sn , is a similarity matrix, whose (i, j)-th element S(i, j) represents the similarity between example pair i, j and ?(K) is defined in (1). By interchanging the maximization over ? and
minimization over K the authors note that the inner minimization admits a closed form solution:
(3)
K ? = S + (4?)?1 (Y ?)(Y ?)? + ,
where (X)
P + denotes the psd matrix obtained by clipping the negative eigen values
P of X to zero, i.e.,
if X = ni=1 ?i vi vi? is the eigen decomposition of X ? Sn then (X)+ = ni=1 max(?i , 0)vi vi? .
After plugging in the value of K ? authors suggest using sophisticated techniques like Analytic center
cutting plane (ACCP) method to solve the outer maximization in ?.
The formulation (2) was studied further in [5] where an iterative algorithm based on solving a
quadratically constrained linear program (QCLP) was proposed. In both the above approaches the
order of maximization and minimization has been interchanged which lead to optimization problems
that can be posed as semi-infinite quadratically constrained linear Programs (SIQCLP) [5]. In [6] an
alternate loss function kK ? SkF was studied which led to a Second Order Cone Program(SOCP)
formulation. The choice of kK ? Sk2F , as a measure of loss, is arbitrary. In this paper we generalize
the setting in (2) by providing an algorithm which works for any convex loss function. We note that
the method used in solving (2) utilizing (3) is specific to the loss function kK ? Sk2F and do not
apply generally. Apart from non-applicability of the existing methods to other loss functions it is
not clear how these procedures could be used to handle multiple similarity matrices.
Contributions: The key contribution of the paper is to design efficient procedures which can learn
a kernel matrix, K ? Sn+ , from m(? 1) similarity matrices, possibly indefinite, under general convex sub-differentiable loss function by using either SVM or MKL solvers. We study the problem in
two different settings. In the first setting we consider learning a single kernel matrix from multiple
similarity matrices under a general loss function. A novel algorithm, referred in the paper as ESKL,
based on the Mirror Descent (MD) [3] procedure is proposed. It is a provably convergent algorithm
n
which requires O( log
?2 ) calls to an SVM solver. In the second setting we consider learning separate
kernel matrix for each of the given similarity matrices and then aggregating them using a Multiple
Kernel Learning (MKL) setup. The resultant formulation is non-trivial to solve. We present EMKL
which is based on generalizing the existing MD setup to deal with Cartesian product of psd matri2
n
ces. Like the previous case it requires O( m ?log
) calls to an MKL solver. At every iteration the
2
algorithm also requires eigen-decomposition which is expensive. We present a related algorithm,
REKL, which does not require the expensive eigen-decomposition step but yields similar classification performance as EMKL. Apart from allowing general loss functions the procedures also opens
up new avenues for learning multiple kernels, which could be viable alternatives to the framework
proposed in [7].
The remainder of the paper is organized as follows: in section 2 we discuss problem formulation.
Our main contribution is in section 3, where we develop mirror descent algorithms for learning
kernel from multiple indefinite similarity matrices. We also analyze the complexity and convergence
properties of the proposed algorithms. Finally we present our experimental results in section 4.
2
2 Problem formulation
Given multiple similarity matrices {Si : i = 1, . . . , m} we consider the following formulation
m
i
h
X
L
(K
?
S
)
,
min
f
(K)
?
?(K)
+
?
(4)
i
i
n
K?S+ , tr(K)=?
i=1
where ? ? 0 is a trade-off parameter, Li : Sn ? R is a convex sub-differentiable loss function
operating on K and Si . We impose the trace constraint on K to ensure good generalization as in [7].
A more naturally suited formulation for handling multiple similarity matrices is to consider learning
individual kernel matrix Ki from similarity matrix SP
i , ?i and invoke a Multiple Kernel
Learning
n
?
K
|
K
?
S
,
?
?
0,
?i
. In [7] the
(MKL) setup to obtain a kernel matrix K ? K ,
i
i
i
i
+
i
MKL problem is proposed as
?(K1 , . . . , Km ) =
min
?(K) ,
(5)
K?K, tr(K)=?
where the kernels Ki ?s are fixed and ?i ?s are variable. Based on MKL we consider the following
kernel learning formulation
m
i
h
X
Li (Ki ? Si ) ,
min
F (K1 , . . . , Km ) ? ?(K1 , . . . , Km ) + ?
(6)
K1 , ..., Km
i=1
n
s.t. Ki ? S+ , tr(Ki ) = ? , i = 1, . . . , m.
Note that ?(K1 , . . . , Km ) can be obtained by solving any standard MKL formulation.
Nm
The restriction of ?(K1 , . . . , Km ) on the Cartesian product of sets
=
i=1 { Ki
P
?
?
v
v
|
?
?
0,
v
is
j-th
eigen
vector
of
S
},
yields
a
very
interesting
alternative
to
ij
ij
ij
ij
i
ij
j
(6). Based on this restriction we formulate the restricted kernel learning problem as
m
i
h
X
?i (?i , ?i ) ,
min n g(?1 , . . . , ?m ) ? ?(K1 , . . . Km ) + ?
?1 , ..., ?m ?R
(7)
i=1
Pm
P
?
, Ki ? Sn+ ,
?
=
?
,
i
=
1,
.
.
.
,
m,
s.t. Ki = j ?ij vij vij
ij
j=1
where ?i = [?i1 . . . ?in ]? denotes the eigen values of Si and ?i : Rn ? Rn ? R is a convex loss
function on ?i = [?i1 . . . ?in ]? .
We mention that the formulation (4) generalizes the existing single similarity matrix based formulations. For m = 1 with L(X) = kXk2F , L(X) = kXkF we recover the formulations in [8] and [6]
respectively (albeit with a trace constraint). Also the SOCP based spectrum modification learning
formulation [6] proposed in the context of single similarity matrix (m = 1) is a special case of (7).
3 Kernel Learning using Mirror Descent
In this section we derive general methods for solving (4) and (6) based on the following assumptions:
loss function Li is convex; a sub-gradient L?i can be computed efficiently and the computed subgradients are bounded. We also assume the availability of an efficient SVM / MKL solver.
3.1 Entropic single kernel learning (ESKL) algorithm
We denote the feasible set of kernels as K = {K ? Sn+ | tr(K) = ? } and its relative interior as
int(K) = {K ? Sn | tr(K) = ?, K is positive definite }. Note that K is convex and compact.
Define inner-product on Sn as hK, K ? i = tr(KK ? ). From Eqn. (1) we note that ?(K) is a convex
function of K. Therefore the objective function f in (4) is convex and Lipschitz continuous on K.
Let ?? denote a maximizer of the SVM dual (1). Then we can compute a sub-gradient of f as
Pm
(8)
f ? (K) = ?0.5 Y ?? ??? Y + ? i=1 L?i (K ? Si ) .
Thus the convex programming problem (4) satisfies the conditions for applying Mirror Descent
(MD) [2] scheme. To apply MD procedure we require a strongly convex and continuously differentiable function ? : K ? R. Following [2] we choose negative of matrix entropy as the candidate for
?:
Pn
?(K) = j=1 ?j log ?j ,
(9)
3
where (?1 , . . . , ?n ) are the eigen values of K ? K. With the above setup we derive an MD
algorithm, named entropic single kernel learning (ESKL) algorithm, similar to the entropic mirror
descent algorithm proposed in [2].
Algorithm 1 Entropic single kernel learning (ESKL) algorithm
Initialization: K (1) ? int(K). Set t = 0.
repeat
? t := t + 1.
? Obtain ?? i.e. a maximizer of the SVM dual problem (1) forP
kernel K = K (t) .
?
(t)
? ??
? Compute sub-gradient f (K ) := ?0.5 Y ? ? Y + ? j L?i (K (t) ? Si ).
? Choose suitable step-size ?t .
(t)
(t)
? Compute eigen decomposition f ? (K (t) ) = V (t) diag([d1 . . . dn ]) V (t) ? .
(t)
(t)
? ?j exp(??t dj )
(t+1)
:= Pn
? ?j
, ?j = 1, . . . , n.
(t)
(t)
?
exp(??
d
)
t
l=1 l h
l
i
(t+1)
(t+1)
? K (t+1) := V (t) diag ?1
. . . ?n
V (t) ? .
until Convergence
Proposition 3.1. Let f (t) denote the objective function value at t-th iteration and f ? be the optimal
objective value. If the ESKL algorithm is initialized with K (1) = n? I and the step-sizes are chosen
r
r
1
2 log n
2 log n
(t)
?
then min f ? f ? ? Lip(f )
, where Lip(f ) is a
as ?t =
1?t?T
Lip(f )
t
T
Lipschitz constant of f such that k f ? (K (t) ) kF ? Lip(F ) , t = 1, . . . , T .
Proof. The strong convexity constant of ? w.r.t. k ? kF norm is ? = ?1 . Let B? denote the Bregman
distance function [2] generated by ?. Then we have B? (K, K (1) ) ? ? log n , ? K ? K (assuming
n ? 3). We complete the proof by applying Theorem 4.2 of [2] to the ESKL algorithm.
3.2 Entropic multiple kernel learning (EMKL) algorithm
Consider the kernel learning formulation (6) which minimizes the distances of kernels {Ki : i =
1, . . . , m} from the corresponding similarity matrices {Si : i = 1, . . . , m} and simultaneously
learns an SVM classifier by performing multiple kernel learning (MKL) on those kernels. To learn
a non-sparse combination of kernels the following MKL formulation has been proposed in [10]:
m
h
i
X
1
1
?(K1 , . . . , Km ) ? max max ?? 1 ? ?? Y
Ki Y ? ,
(10)
? ??m ??A
2
?
i=1 i
P
where ?m = ? = [?1 . . . ?m ]? :
i ?i ? 1, ?i ? 0, ?i . With ?(K1 , . . . , Km ) as defined
above, the objective function F in (6) can be expressed as
X
Fi (Ki ; ?, ?) ,
F (K1 , . . . , Km ) = max max
? ??m ??A
(11)
i
1 ?
1 ? ? 2?1 i tr Ki Y ??? Y + ? Li (Ki ? Si ) , i = 1, . . . , m.
Fi (Ki ; ?, ?) = m
Nm
Nm n
n
m
:=
Let V :=
i=1 K ? V. Denote
i=1 S , K = {K ? S+ : tr(K) = ? } and K
P
m
?
?
K = (K1 , . . . , Km ) ? Km . Define inner product on V as hK,
K
i
:=
V
i=1 hKi , Ki i, where
Pm
?
?
hKi , Ki i = tr(Ki Ki ). Also define a norm on V as kKk =
i=1 kKi kF . From (10) we note
that ?(K1 , . . . , Km ) is a convex function of (K1 , . . . , Km ) over the compact space Km . Thus the
objective function F in (6) is convex and Lipschitz continuous on Km .
Lemma 3.1. Let (?? , ? ? ) be a solution of (10) and L?i be a sub-gradient of Li . Then a sub-gradient
of F is given by
F ? (K1 , . . . , Km ) =
?K1 F1 (K1 ; ?? , ? ? ) ? ? ? ?Km Fm (Km ; ?? , ? ? ) ,
(12)
1
?Ki Fi (Ki ; ?? , ? ? ) = ? ? Y ?? ??? Y + L?i (Ki ? Si ) , i = 1, . . . , m.
2 ?i
4
Proof. First, we observe that Fi (Ki ; ?, ?) is a convex function of Ki ? K and the expression of
?Ki Fi (Ki ; ?, ?) given in Eqn. (12) is precisely a sub-gradient of Fi . Therefore, we can write
Fi (Ki? ; ?, ?) ? Fi (Ki ; ?, ?) + h Ki? ? Ki , ?Ki Fi (Ki ; ?, ?) i , ?Ki? ? K.
(13)
Pm
?
?
By optimality of (?? , ? ? ) we have F (K1 , . . . , Km ) = P
i=1 Fi (Ki ; ? , ? ). Because of the
m
?
?
?
?
?
?
max operation over ?, ?, we have F (K1 , . . . , Km ) ?
i=1 Fi (Ki ; ? , ? ) for any K =
?
(K1? , . . . , Km
) ? Km . Applying (13) we arrive at
D
E
?
,
F (K1? , . . . , Km
) ? F (K1 , . . . , Km ) + K? ? K , F ? (K1 , . . . , Km )
V
Hence, F ? (K1 , . . . , Km ) given in (12) is a sub-gradient of F .
We develop a novel Mirror Descent procedure for problem (6) by defining a strongly convex and
continuously differentiable function ? on the product space Km as
Pm Pn
?(K) = i=1 j=1 ?i,j log ?i,j , K ? Km ,
(14)
where (?i,1 , . . . , ?i,n ) denote eigen values of Ki . The resulting algorithm, named entropic multiple
kernel learning (EMKL), is given below.
Algorithm 2 Entropic multiple kernel learning (EMKL) algorithm
(1)
Initialization: Ki ? int(K), i = 1, . . . , m. Set t = 0.
repeat
t := t + 1.
(t)
Obtain ?? , ? ? by solving the MKL problem (10) with Ki = Ki , i = 1, . . . , m.
for i = 1 to m do
(t)
(t)
? Compute sub-gradient ?Ki Fi (Ki ; ?? , ? ? ) := ? 2 1? ? Y ?? ??? Y + L?i (Ki ? Si ).
(t)
(t+1)
? ?i,j
(t+1)
:=
:=
? Ki
end for
until Convergence
i
(t)
? Find eigen decomposition ?Ki Fi (Ki ; ?? , ? ? ) = Vi
(t)
(t)
(t) ?
diag([di,1 . . . di,n ]) Vi
.
(t)
exp(??t di,j )
?
, j = 1, . . . , n.
Pn
(t)
(t)
l=1 ?i,l exp(??t di,l )
(t) ?
(t+1)
(t+1)
(t)
.
Vi diag([?i,1 . . . ?i,n ]) Vi
(t)
?i,j
Theorem 3.2. Let F (t) denote the objective function value at t-th iteration and F ? be the optimal
(1)
objective value. If the EMKL algorithm is initialized with Ki = n? I , ? i and the step-sizes are
r
r
1
2 log n
2 log n
(t)
?
then min F ? F ? ? m Lip(F )
, where
chosen as ?t =
1?t?T
Lip(F )
mt
T
(t)
Lip(F ) is a Lipschitz constant of F such that k ?Ki F (Ki ; ?, ?) kF ? Lip(F ) , ?i, t.
(t)
(t)
?
Proof. Let K? = (K1? , . . . , Km
) be an optimal solution of (6). Denote K(t) = K1 , . . . , Km .
We apply the convergence result presented as Theorem 4.2 in [2]. This leads to the following:
s
s
?
(t)
1
2 ? B? (K , K )
2 B? (K? , K(t) ) (15)
?t =
? min F (t) ? F ? ? Lip(F )
,
1?t?T
Lip(F )
t
?T
where ? > 0 is the strong convexity constant of ? and B? is the Bregman distance function generated by ?. For the ? function defined in Eqn. (14), we have ? = m1? . Assuming n ? 3, we also
have B? (K, K(1) ) ? m ? log n , ? K ? Km . Substituting values for B? and ? in (15) we obtain
the desired result.
5
3.3 Restricted entropic kernel learning (REKL) algorithm
The proposed EMKL algorithm is computationally expensive as it computes eign decomposition
of m matrices of dimension n ? n at every iteration. Here we propose an efficient algorithm
by considering the restricted kernel learning formulation (7) where ?(K1 , . . . , K
) is given in
Pm
n
Eqn. (10). We denote the feasible set for ?i as X := {?i ? Rn | ?ij ? 0, ?j, j=1 ?ij = ? },
which is a convex compact subset of Rn . We note that ? in (10) when
Nm viewed as a function of
?i ?s, is a convex function on the Cartesian product space X m :=
i=1 X . The loss function
?i is assumed to be a convex function of ?i with bounded sub-gradients. Hence, the objective
function g in (7) is convex and Lipschitz continuous over the compact space X m . Denote a subgradient of ?i as [ ??i1 ?i (?i , ?i ) , . . . , ??in ?i (?i , ?i ) ]? . We can compute a sub-gradient of ? as
?
?? = ( ??11 ? , ??12 ? , . . . , ??nn ? ), where ??ij ? = ? 2 1? ? ??? Y vij vij
Y ?? . We derive an MD
i
algorithm, named restricted entropic kernel learning (REKL), by extending the entropic mirror descent scheme [2] to deal with Cartesian product of simplices. This is achieved by defining a strongly
convex and continuously differentiable function ?e : X m ? R as
Pm Pn
?e (?1 , . . . , ?m ) = i=1 j=1 ?ij log ?ij , ?i ? X , ?i.
(16)
Algorithm 3 Restricted entropic kernel learning (REKL) algorithm
P
?
, i = 1, . . . , m.
Find eigen decomposition: Si = j ?ij vij vij
(1)
Initialization: ?i ? int(X ) , i = 1, . . . , m. Set t = 0.
repeat
t= t+1
P (t)
?
Obtain ?? , ? ? by solving the MKL problem (10) with Ki = j ?ij vij vij
, i = 1, . . . , m.
for i = 1 to m do
(t)
(t)
?
? Compute sub-gradient g ? ij := ? 2 1? ? ??? Y vij vij
Y ?? + ??ij ?i (?i , ?i ).
i
(t)
(t)
? ?ij exp ??t g ? ij
(t+1)
, j = 1, . . . , n.
:= P
? ?ij
(t)
n
? (t)
l=1 ?il exp ??t g il
end for
until Convergence
Proposition 3.2. Let g (t) denote the objective function value at t-th iteration and g ? be the optimal
(1)
objective value. If the REKL algorithm is initialized with ?i = n? 1 , ? i and the step-sizes are
r
r
1
2 log n
2 log n
(t)
?
chosen as ?t =
then min g ? g ? ? m Lip(g)
, where Lip(g)
1?t?T
Lip(g)
mt
T
(t)
is a Lipschitz constant of g such that |g ? ij | ? Lip(g) , i, = 1, . . . , m, j = 1, . . . , n, t = 1, . . . , T .
Proof. The proof is similar to that of Theorem 3.2.
3.4 Discussion
n
iterations (see Proposition 3.1), where in each iteration
The ESKL formulation requires O log
?2
one solvesan SVM
and eigen-decomposition of n ? n matrix. Both EMKL and REKL formulations
m2 log n
require O
iterations (see Theorem 3.2 and Proposition 3.2), and in each iteration one
?2
solves an MKL problem. However EMKL is more computationally expensive than REKL.
4 Experiments and Results
In this section we experimentally compare the proposed kernel learning formulations against
IndSVM [8] and the eigen transformation methods:
PDenoise, Flip, Shift [15]. Given an indefinite
similarity matrix S with eigen-decomposition S = j ?j vj vj? , eigen transformation methods genP
erate kernel matrix as K := j ?j vj vj? , where choice ?j ?s are: (a) Denoise: ?j = max(?j , 0),
6
(b) Flip: ?j = |?j |, (c) Shift: ?j = ?j ? ?, where ? = min{?1 , . . . , ?n , P
0}. We consider the following choices for the loss functions in ESKL / EMKL: [L1 ] L(K ?S) = i,j |K(i, j)?S(i, j)|,
P
2
[L2 ] L(K ? S) = kK ? SkF , [L3 ] L(K ? S) =
i,j |K(i, j) ? S(i, j)| . For REKL we
choose ?(?j , ?j ) = |?j ? ?j k2 , i.e., the Euclidean distance. Algorithm parameters are tuned using
standard 5 fold cross validation procedure. LibSVM [4] is used as the SVM solver. For each data set
we have considered equal number of positive and negative training / test samples. We report classification performance in terms of accuracy and F-score (expressed as % ) averaged over 5 different
train / test splits.
4.1 Data sets
We experimented on 10 different data sets including the data sets covered in [16, 6]. We have
generated the indefinite similarity matrices as prescribed in [16] for each of the following data sets:
Sonar, Liver disorder, Ionosphere, Diabetes and Heart. We have used the same similarity matrices
as in [6]:1 for the data sets Amazon, AuralSonar, Yeast-SW-5-7 and Yeast-SW-5-12 .
To test the proposed multiple similarity based formulations we experimented on a subset of the
SCOP database [9] taken from Protein Classification Benchmark Collection 2 . Considering proteins having < 40% sequence identity, we randomly select 8 super-families which have at least
45 proteins. We compute 3 different pairwise similarity measure for proteins: Psi-BLAST [1],
Smith-Waterman [14] and Needleman-Wunsch [11]. The similarity matrices obtained from these 3
similarity measures are indefinite in general.
4.2 Effect of various loss functions
We experimentally demonstrate the ability of the proposed ESKL algorithm in handling general
convex loss function. Classification performance is presented in Table 1. We observe that on Liver
disorder data set L2 loss performs better than L1 , L3 . Again, on Diabetes and Heart data sets
both L1 , L2 provides much better performance better than L3 . From Table 2 we observe that on
AuralSonar data set ESKL formulation works best with L3 loss function. But on Yeast-SW-5-7 data
set L1 loss function works best. Therefore we can say that the choice of loss function has an effect
on classification accuracy. This suggest the need for a general algorithm which provides flexibility
to choose loss function based on the data set. Hence in this paper we have developed the algorithms
keeping the choice of loss function almost open.
Table 1: Comparison of classification accuracy (odd rows) and F-score (even rows) on UCI data sets
Dataset
Sonar
Liver disorder
Ionosphere
Diabetes
Heart
Eigen Transformation
Denoise
Flip
Shift
71.5
72.5
76.5
70.0
70.6
75.0
57.6
54.5
55.5
55.4
53.8
52.9
87.3
89.6
91.2
87.6
89.9
91.4
63.9
58.7
64.4
65.0
58.8
65.1
73.3
63.8
75.8
73.1
65.1
76.9
IndSVM
[8]
76.5
74.8
59.7
55.8
91.5
91.8
70.2
71.4
76.3
76.5
[L1 ]
75.5
73.9
61.0
58.9
88.5
88.5
73.3
74.3
78.8
79.5
ESKL
[L2 ]
73.0
71.3
62.8
59.1
91.2
91.4
73.5
74.6
78.8
79.0
[L3 ]
75.5
74.1
60.7
57.5
91.5
91.8
69.8
71.0
76.3
76.5
4.3 Combining multiple sequence similarity matrices for Proteins
Consider the task of classifying proteins into super-families when multiple sequence similarity measures are available. We perform 1 vs rest classification experiments on each of the 8 protein superfamilies and report performance averaged over 5 train / test splits. One can extend IndSVM [8]
1
2
http://idl.ee.washington.edu/SimilarityLearning/
http://net.icgeb.org/benchmark/
7
Table 2: Comparison of classification accuracy (odd rows) and F-score (even rows) on real data sets
Dataset
Amazon
AuralSonar
Yeast-SW-5-7
Yeast-SW-5-12
Eigen Transformation
Denoise
Flip
Shift
83.8
83.8
85.0
84.8
84.8
85.9
87.0
87.0
87.0
86.5
86.3
86.3
75.5
70.0
74.0
77.1
72.4
74.1
86.0
85.5
86.0
87.1
85.8
87.6
IndSVM
[8]
87.5
86.9
88.0
87.3
77.0
77.7
90.0
90.9
[L1 ]
88.8
88.0
88.0
87.3
79.0
79.9
90.0
90.9
ESKL
[L2 ]
85.0
84.3
87.0
86.3
75.5
76.1
90.5
91.35
[L3 ]
88.8
88.0
90.0
89.1
76.5
77.2
90.0
90.9
Table 3: Comparison of classification accuracy (odd rows) and F-score (even rows) on Proteins
Super
family
a.4.1
b.1.18
b.29.1
b.40.4
c.1.8
c.3.1
c.47.1
c.67.1
Linear
SVM
51.9
67.5
63.8
73.9
70.6
55.6
66.9
74.8
58.8
29.5
91.9
90.9
88.1
85.7
88.8
87.2
Eigen
Denoise
53.1
68.1
62.5
73.1
80.6
76.3
68.1
75.4
75.0
65.1
97.5
97.4
86.2
87.8
90.6
89.6
IndSVM
[8]
54.4
68.7
58.1
70.7
75.6
67.0
59.4
71.1
66.9
50.1
95.6
95.4
76.2
81.5
90.6
89.6
simple MKL
[12]
56.9
69.9
65.6
74.7
77.5
70.2
70.0
76.7
73.7
62.7
95.0
94.7
90.0
88.8
91.2
90.3
ESKL
[L1 ]
70.0
77.1
71.9
78.6
85.0
83.5
71.9
77.3
80.6
74.6
95.6
95.2
84.4
85.7
81.9
76.8
EMKL
[L1 ]
73.1
78.9
75.6
80.6
83.8
82.1
68.8
76.3
85.0
80.8
95.6
95.4
90.6
90.3
91.2
90.3
REKL
k ? k2
84.4
86.5
74.4
78.6
75.0
71.0
76.2
78.5
80.0
77.9
96.2
96.0
84.4
86.4
93.1
92.6
originally proposed to handle single similarity matrix, to multiple similarity matrices by averaging
over the similarity matrices. We implement the linear SVM by considering similarities as feature
and computing a linear kernel. We also compare with a multiple kernel learning formulation, simple
MKL [12]. Denoised version of the similarity matrices are given as input to simple MKL. In Table 3 the proposed multiple similarity based kernel learning algorithms ESKL / EMKL / REKL are
compared with the other methods mentioned above. We observe significant performance improvement in most cases. We also note that REKL is computationally cheaper than EMKL but provides
reasonably good performance.
5 Conclusion
We have proposed three formulations, (4), (6), (7) for learning kernels from multiple similarity matrices. The key advantages of the proposed algorithms over the state of the art are: (i) require only
SVM / MKL solvers and does not require any other sophisticated tools; (ii) the algorithms are applicable for a wide choice of loss functions and multiple similarity functions. Proposed methods can
also be seen as an alternative to Multiple Kernel learning,which will be explored in future research.
Acknowledgments
Prof. Chiranjib Bhattacharyya was partly supported by Yahoo! faculty award grant.
8
References
[1] Stephen F. Altschul, Thomas L. Madden, Alejandro A. Schffer, Ro A. Schffer, Jinghui Zhang,
Zheng Zhang, Webb Miller, and David J. Lipman. Gapped blast and psiblast: a new generation
of protein database search programs. NUCLEIC ACIDS RES, 25:3389?3402, 1997.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for
convex optimization. Operations Research Letters, 31:167?175, 2003.
[3] A. Ben-Tal, T. Margalit, and A. Nemirovski. The ordered subsets mirror descent optimization
method with applications to tomography. SIAM J. Optim., 12:79?108, 2001.
[4] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001.
Software available at http://www.csie.ntu.edu.tw/?cjlin/libsvm.
[5] J. Chen and J. Ye. Training svm with indefinite kernels. In International Conference on
Machine Learning. 2008.
[6] Y. Chen, M. R. Gupta, and B. Recht. Learning kernels from indefinite similarities. In International Conference on Machine Learning. 2009.
[7] G. R. Gert Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui, and Michael I. Jordan. Learning
the kernel matrix with semidefinite programming. Journal of Machine Learning Research,
5:27?72, 2004.
[8] R. Luss and A. d?Aspremont. Support vector machine classification with indefinite kernels. In
Advances in Neural Information processing Systems. 2007.
[9] A. G. Murzin, S. E. Brenner, T. Hubbard, and C. Chothia. Scop: a structural classification
of proteins database for the investigation of sequences and structures. Journal of Molecular
Biology, 247:536?540, 1995.
[10] J. Saketha Nath, G. Dinesh, S. Raman, C. Bhattacharyya, A. Ben-Tal, and K.R. Ramakrishnan.
On the algorithmics and applications of a mixed-norm based kernel learning formulation. In
Advances in Neural Information Processing Systems, pages 844?852. 2009.
[11] Saul B. Needleman and Christian D. Wunsch. A general method applicable to the search
for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology,
48(3):443?453, 1970.
[12] A. Rakotomamonjy, Francis R. Bach, S. Canu, and Y. Grandvalet. Simplemkl. Journal of
Machine Learning Research, 9:2491?2521, 2008.
[13] Bernhard Sch?olkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). The MIT Press, 2001.
[14] T. F. Smith and M. S. Waterman. Identification of common molecular subsequences. Journal
of Molecular Biology, 147(1):195 ? 197, 1981.
[15] G. Wu, Z. Zhang, and E. Y. Chang. An analysis of transformation on non-positive semidefinite similarity matrix for kernel machines. Technical Report, University of California, Santa
Barbara, 2005.
[16] Y. Ying, C. Campbell, and M. Girolami. Analysis of SVM with indefinite kernels. In Advances
in Neural Information processing Systems, 2009.
9
| 4131 |@word erate:1 version:2 faculty:2 briefly:1 norm:3 suitably:1 open:2 km:32 decomposition:11 idl:1 mention:1 tr:9 efficacy:1 score:5 tuned:1 bhattacharyya:3 past:1 existing:5 optim:1 si:11 engg:1 analytic:1 christian:1 v:1 plane:1 smith:3 provides:3 org:1 zhang:3 dn:1 viable:1 blast:3 pairwise:1 solver:8 considering:3 iisc:3 bounded:2 israel:1 minimizes:1 developed:1 transformation:5 every:2 ro:1 classifier:1 k2:2 grant:1 positive:5 before:1 aggregating:1 simplemkl:1 initialization:3 studied:5 kxk2f:1 nemirovski:1 averaged:2 acknowledgment:1 definite:1 implement:1 siqclp:1 procedure:8 protein:13 suggest:2 cannot:1 interior:1 context:1 applying:3 restriction:3 www:1 center:1 eskl:14 murzin:1 straightforward:1 convex:24 formulate:1 amazon:2 disorder:3 m2:1 utilizing:1 wunsch:2 handle:4 gert:1 user:1 programming:2 us:1 diabetes:3 lanckriet:1 element:1 expensive:4 database:3 csie:1 trade:1 mentioned:1 convexity:2 complexity:1 cristianini:1 gapped:1 solving:7 various:2 train:2 whose:1 emerged:1 posed:1 solve:2 say:1 ability:1 saketha:1 sequence:6 differentiable:5 advantage:1 net:1 propose:1 product:8 remainder:1 uci:1 combining:1 flexibility:1 ky:1 olkopf:1 convergence:5 extending:1 ben:3 illustrate:1 develop:2 ac:1 derive:3 liver:3 ij:20 odd:3 strong:2 solves:4 girolami:1 require:6 f1:1 generalization:1 ntu:1 investigation:1 proposition:4 extension:1 considered:1 exp:6 kki:1 substituting:1 major:1 interchanged:1 entropic:11 applicable:2 label:1 hubbard:1 tool:2 minimization:3 mit:1 super:3 pn:5 cwi:1 derived:1 improvement:1 hk:2 industrial:1 whence:1 nn:1 margalit:1 i1:3 provably:2 classification:14 dual:2 denoted:1 yahoo:1 constrained:2 special:1 ernet:3 art:1 equal:1 having:1 washington:1 lipman:1 biology:3 represents:1 future:1 interchanging:1 report:3 bangalore:3 randomly:1 simultaneously:1 individual:1 cheaper:1 beck:1 psd:6 zheng:1 semidefinite:3 accurate:2 bregman:2 skf:2 euclidean:1 initialized:3 haifa:1 desired:1 forp:1 re:1 instance:1 teboulle:1 kxkf:1 maximization:3 clipping:1 applicability:2 rakotomamonjy:1 subset:3 technion:2 recht:1 international:2 siam:1 ie:1 off:1 invoke:1 michael:1 continuously:3 again:1 nm:4 management:1 choose:4 possibly:1 chung:1 li:5 socp:3 scop:2 automation:3 availability:1 int:4 vi:8 closed:1 analyze:1 francis:1 recover:1 denoised:1 contribution:4 il:3 ni:2 accuracy:5 acid:2 efficiently:1 miller:1 yield:3 generalize:1 identification:1 lu:1 against:1 resultant:1 naturally:1 proof:6 di:4 psi:1 dataset:2 knowledge:1 organized:1 sophisticated:3 campbell:1 originally:1 formulation:25 strongly:3 smola:1 until:3 eqn:4 auralsonar:3 nonlinear:1 maximizer:2 mkl:19 qclp:1 yeast:5 effect:2 ye:1 needleman:2 hence:3 regularization:1 symmetric:1 dinesh:1 deal:2 complete:1 demonstrate:1 performs:1 l1:8 novel:3 fi:13 common:1 mt:2 hki:2 extend:2 m1:1 significant:1 pm:7 canu:1 dj:1 l3:6 similarity:42 operating:1 alejandro:1 etc:2 apart:2 barbara:1 altschul:1 binary:1 seen:1 impose:1 stephen:1 semi:1 multiple:25 ii:1 technical:1 faster:1 bach:1 cross:1 lin:1 molecular:4 award:1 plugging:1 involving:2 iteration:12 kernel:54 achieved:1 c1:1 background:1 sch:1 rest:1 nath:1 jordan:1 call:2 ee:1 structural:1 split:2 chothia:1 fm:1 inner:3 avenue:1 shift:4 expression:1 bartlett:1 generally:1 clear:1 santa:1 covered:1 extensively:1 tomography:1 svms:1 http:3 write:1 key:2 indefinite:9 ce:1 libsvm:3 subgradient:2 cone:1 letter:1 powerful:1 named:3 arrive:1 family:3 almost:1 chih:2 wu:1 raman:1 ki:48 convergent:2 fold:1 constraint:2 precisely:1 software:1 tal:3 extremely:1 
min:9 optimality:1 subgradients:1 performing:1 prescribed:1 relatively:1 alternate:1 combination:1 tw:1 modification:1 restricted:5 ghaoui:1 heart:3 taken:1 computationally:3 chiranjib:2 discus:1 cjlin:1 flip:4 end:2 generalizes:1 aharon:1 operation:2 available:2 apply:4 observe:4 alternative:4 eigen:20 thomas:1 denotes:2 ensure:1 sw:5 k1:26 prof:1 objective:11 md:7 visiting:1 gradient:11 distance:4 separate:1 outer:1 trivial:1 assuming:2 minn:1 kk:6 providing:1 ying:1 difficult:1 setup:4 webb:1 trace:2 negative:3 design:2 perform:1 allowing:1 nucleic:1 benchmark:2 waterman:3 descent:10 defining:3 rn:6 arbitrary:3 introduced:1 david:1 pair:2 california:1 quadratically:2 algorithmics:1 beyond:1 suggested:1 below:1 program:4 max:8 including:1 suitable:1 kundu:1 sk2f:3 scheme:2 technology:1 library:1 madden:1 aspremont:1 sn:11 review:1 literature:2 l2:5 kf:4 relative:1 loss:27 mixed:1 interesting:1 generation:1 validation:1 vij:10 grandvalet:1 classifying:1 row:6 repeat:3 supported:1 keeping:1 institute:4 wide:2 saul:1 focussed:1 sparse:1 superfamily:1 dimension:1 world:1 avoids:1 computes:2 author:2 collection:1 adaptive:1 projected:1 compact:4 cutting:1 bernhard:1 assumed:1 spectrum:1 subsequence:1 continuous:3 iterative:2 search:2 vikram:2 sonar:2 table:6 lip:14 learn:2 reasonably:1 csa:3 substituted:1 diag:5 sp:1 vj:4 main:1 denoise:4 amino:1 referred:1 simplices:1 sub:13 candidate:1 learns:1 theorem:5 specific:2 jen:1 kkk:1 explored:1 experimented:2 svm:16 admits:1 ionosphere:2 gupta:1 albeit:1 mirror:10 cartesian:5 chen:2 suited:1 entropy:1 generalizing:1 led:1 amsterdam:1 expressed:2 ordered:1 chang:2 ramakrishnan:1 satisfies:1 chiru:1 viewed:1 identity:1 lipschitz:6 professor:1 feasible:2 experimentally:2 brenner:1 infinite:1 averaging:1 lemma:1 called:2 partly:1 experimental:2 select:1 support:4 alexander:1 indian:3 dept:3 d1:1 handling:3 |
3,459 | 4,132 | The Maximal Causes of Natural Scenes are Edge Filters
Gervasio Puertas*
Frankfurt Institute for Advanced Studies
Goethe-University Frankfurt, Germany
[email protected]
Jörg Bornschein*
Frankfurt Institute for Advanced Studies
Goethe-University Frankfurt, Germany
[email protected]
Jörg Lücke
Frankfurt Institute for Advanced Studies
Goethe-University Frankfurt, Germany
[email protected]
Abstract
We study the application of a strongly non-linear generative model to image
patches. As in standard approaches such as Sparse Coding or Independent Component Analysis, the model assumes a sparse prior with independent hidden variables. However, in the place where standard approaches use the sum to combine
basis functions we use the maximum. To derive tractable approximations for parameter estimation we apply a novel approach based on variational Expectation
Maximization. The derived learning algorithm can be applied to large-scale problems with hundreds of observed and hidden variables. Furthermore, we can infer
all model parameters including observation noise and the degree of sparseness.
In applications to image patches we find that Gabor-like basis functions are obtained. Gabor-like functions are thus not a feature exclusive to approaches assuming linear superposition. Quantitatively, the inferred basis functions show a large
diversity of shapes with many strongly elongated and many circular symmetric
functions. The distribution of basis function shapes reflects properties of simple
cell receptive fields that are not reproduced by standard linear approaches. In the
study of natural image statistics, the implications of using different superposition
assumptions have so far not been investigated systematically because models with
strong non-linearities have been found analytically and computationally challenging. The presented algorithm represents the first large-scale application of such an
approach.
1 Introduction
If Sparse Coding (SC, [1]) or Independent Component Analysis (ICA; [2, 3]) are applied to image
patches, basis functions are inferred that closely resemble Gabor wavelet functions. Because of the
similarity of these functions to simple-cell receptive fields in primary visual cortex, SC and ICA
became the standard models to explain simple-cell responses, and they are the primary choice in
modelling the local statistics of natural images. Since they were first introduced, many different
versions of SC and ICA have been investigated. While many studies focused on different ways to
efficiently infer the model parameters (e.g. [4, 5, 6]), many others investigated the assumptions used
in the underlying generative model itself. The modelling of observation noise can thus be regarded
as the major difference between SC and ICA (see, e.g., [7]). Furthermore, different forms of independent sparse priors have been investigated by many modelers [8, 9, 10], while other approaches
have gone a step further and studied a relaxation of the assumption of independence between hidden
variables [11, 12, 13].
* authors contributed equally
An assumption that has, in the context of image statistics, been investigated relatively little is the
assumption of linear superpositions of basis functions. This assumption is not only a hallmark of
SC and ICA but, indeed, is an essential part of many standard algorithms including Principal Component Analysis (PCA), Factor Analysis (FA; [14]), or Non-negative Matrix Factorization (NMF;
[15]). For many types of data, linear superposition can be motivated by the actual combination rule
of the data components (e.g., sound waveforms combine linearly). For other types of data, including
visual data, linear superposition can represent a severe approximation, however. Models assuming
linearity are, nevertheless, often used because they are easier to study analytically and many derived algorithms can be applied to large-scale problems. Furthermore, they perform well in many
applications and may, to certain extents, succeed well in modelling the distribution, e.g., of local
image structure. From the perspective of probabilistic generative models, a major aim is, however,
to recover the actual data generating process, i.e., to recover the actual generating causes (see, e.g.,
[7]). To accomplish this, the crucial properties of the data generation should be modelled as realistically as possible. If the data components combine non-linearly, this should thus be reflected
by the generative model. Unfortunately, inferring the parameters in probabilistic models assuming
non-linear superpositions has been found to be much more challenging than in the linear case (e.g.
[16, 17, 18, 19], also compare [20, 21]). To model image patches, for instance, large-scale applications of non-linear models, with the required large numbers of observed and hidden variables, have
so far not been reported.
In this paper we study the application of a probabilistic generative model with strongly non-linear
superposition to natural image patches. The basic model has first been suggested in [19], where
tractable learning algorithms for parameter optimization were derived for the case of a superposition based on a point-wise maximum. The model (which was termed Maximal Causes Analysis;
MCA) used a sparse prior for independent and binary hidden variables. The derived algorithms
compared favorably with state-of-the-art approaches on standard non-linear benchmarks and they
were applied to realistic data. However, the still demanding computational costs limited the application domain to relatively small-scale problems. The unconstrained model for instance was used
with at most H = 20 hidden units. Here we use a novel learning algorithm to infer the parameters
of a variant of the MCA generative model. The approach allows for scaling the model up to several
hundreds of observed and hidden variables. It enables large-scale applications to image patches and,
thus, allows for studying the inferred basis functions as it is commonly done for linear approaches.
2 The Maximal Causes Generative Model
Consider a set of N data points \{\vec{y}^{(n)}\}_{n=1,\ldots,N} sampled independently from an underlying distribution (\vec{y}^{(n)} \in \mathbb{R}^{D \times 1}, D is the number of observed variables). For these data we seek parameters
\Theta = (W, \sigma, \pi) that maximize the data likelihood L = \prod_{n=1}^{N} p(\vec{y}^{(n)} \,|\, \Theta) under a variant of the MCA
generative model [19] which is given by:
p(\vec{s} \,|\, \pi) = \prod_h \pi^{s_h} (1 - \pi)^{1 - s_h},   (Bernoulli distribution)   (1)

p(\vec{y} \,|\, \vec{s}, \Theta) = \prod_d \mathcal{N}(y_d;\, \overline{W}_d(\vec{s}, W),\, \sigma^2),  where  \overline{W}_d(\vec{s}, W) = \max_h \{ s_h W_{dh} \}   (2)
and where \mathcal{N}(y_d; w, \sigma^2) denotes a scalar Gaussian distribution. H denotes the number of hidden
variables s_h, and W \in \mathbb{R}^{D \times H}. The model differs from the one previously introduced by the use
of Gaussian noise instead of Poisson noise in [19]. Eqn. 2 results in the basis functions \vec{W}_h =
(W_{1h}, \ldots, W_{Dh})^T of the MCA model being combined non-linearly by a point-wise maximum. This
becomes salient if we compare (2) with the linear case using the vectorial notation \max_h\{\vec{W}_h\} =
(\max_h\{W_{1h}\}, \ldots, \max_h\{W_{Dh}\})^T for vectors \vec{W}_h \in \mathbb{R}^{D \times 1}:

p(\vec{y} \,|\, \vec{s}, \Theta) = \mathcal{N}(\vec{y};\, \max_h\{ s_h \vec{W}_h \},\, \sigma^2 \mathbb{1})   (non-linear superposition)   (3)

p(\vec{y} \,|\, \vec{s}, \Theta) = \mathcal{N}(\vec{y};\, \sum_h s_h \vec{W}_h,\, \sigma^2 \mathbb{1})   (linear superposition)   (4)

where \mathcal{N}(\vec{y}; \vec{\mu}, \Sigma) denotes the multi-variate Gaussian distribution (note that \sum_h s_h \vec{W}_h = W \vec{s}).
As in linear approaches such as SC, the combined basis functions set the mean values of the observed variables y_d, which are independently and identically drawn from Gaussian distributions
Figure 1: A Example patches extracted from an image and preprocessed using a Difference of
Gaussians filter. B Two generated patches constructed from two Gabor basis functions with approximately orthogonal wave vectors. In the upper-right the basis functions were combined using linear
superposition. In the lower-right they were combined using a point-wise maximum (note that the
max was taken after channel-splitting, see Eqn. 15 and Fig. 2). C Superposition of two collinear Gabor functions using the sum (upper-right) or point-wise maximum (lower-right). D Cross-sections
through basis functions (along the maximum amplitude direction). Left: Cross-sections through two
different collinear Gabor functions (compare C). Right: Cross-sections through their superpositions
using sum (top) and max (bottom).
with variance \sigma^2 (Eqn. 2). The difference between linear and non-linear superposition is illustrated
in Fig. 1. In general, the maximum superposition results in much weaker interferences. This is the
case for diagonally overlapping basis functions (Fig. 1B) and, at closer inspection, also for overlapping collinear basis functions (Fig. 1C,D). Strong interferences as with linear combinations can not
be expected from combinations of image components. For preprocessed image patches (compare
Fig. 4D), it could thus be argued that the maximum combination is closer to the actual combination
rule of image causes. In any case, the maximum represents an alternative to study the implications
of combination rules in the image domain.
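To make the generative process concrete, the following is a minimal numpy sketch of how a single data point is drawn from the MCA model (1) and (2); the function name and the use of a numpy random generator are our own illustration, not part of the original implementation:

```python
import numpy as np

def sample_mca(W, pi, sigma, rng=np.random.default_rng(0)):
    """Draw one data point from the MCA model (Eqns. 1 and 2).

    W     : (D, H) matrix whose columns are the basis functions W_h
    pi    : Bernoulli prior probability that a cause is active
    sigma : standard deviation of the observation noise
    """
    D, H = W.shape
    s = rng.random(H) < pi                    # s_h ~ Bernoulli(pi), Eqn. 1
    mean = (W * s[None, :]).max(axis=1)       # max_h { s_h * W_dh }, Eqn. 2
    return mean + sigma * rng.normal(size=D)  # Gaussian observation noise
```

Replacing the `max` in the second-to-last line by a sum, `(W * s[None, :]).sum(axis=1)`, recovers the linear model of Eqn. 4.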
To optimize the parameters \Theta of the MCA model (1) and (2), we use a variational EM approach (see,
e.g., [22]). That is, instead of maximizing the likelihood directly, we maximize the free-energy:

\mathcal{F}(q, \Theta) = \sum_{n=1}^{N} \Big[ \sum_{\vec{s}} q^{(n)}(\vec{s}; \Theta') \big( \log p(\vec{y}^{(n)} \,|\, \vec{s}, W, \sigma) + \log p(\vec{s} \,|\, \pi) \big) \Big] + H(q),   (5)
where q^{(n)}(\vec{s}; \Theta') is an approximation to the exact posterior. In the variational EM scheme \mathcal{F}(q, \Theta)
is maximized alternately with respect to q in the E-step (while \Theta is kept fixed) and with respect to \Theta
in the M-step (while q is kept fixed). As a multiple-cause model, an exact E-step is computationally
intractable for MCA. Additionally, the M-step is analytically intractable because of the non-linearity
in MCA. The computational intractability in the E-step takes the form of expectation values of
functions g, \langle g(\vec{s}) \rangle_{q^{(n)}}. These expectations are intractable if the optimal choice of q^{(n)} in (5) is used
(i.e., if q^{(n)} is equal to the posterior: q^{(n)}(\vec{s}; \Theta') = p(\vec{s} \,|\, \vec{y}^{(n)}, \Theta')). To derive an efficient learning
algorithm, our approach approximates the intractable expectations \langle g(\vec{s}) \rangle_{q^{(n)}} by truncating the sums
over the hidden space of \vec{s}:
\langle g(\vec{s}) \rangle_{q^{(n)}} = \frac{ \sum_{\vec{s}} p(\vec{s}, \vec{y}^{(n)} \,|\, \Theta') \, g(\vec{s}) }{ \sum_{\vec{s}\,'} p(\vec{s}\,', \vec{y}^{(n)} \,|\, \Theta') } \;\approx\; \frac{ \sum_{\vec{s} \in K_n} p(\vec{s}, \vec{y}^{(n)} \,|\, \Theta') \, g(\vec{s}) }{ \sum_{\vec{s}\,' \in K_n} p(\vec{s}\,', \vec{y}^{(n)} \,|\, \Theta') },   (6)
where K_n is a small subset of the hidden space. Eqn. 6 represents a good approximation if the
set K_n contains most of the posterior probability mass. The approximation will be referred to
as Expectation Truncation and can be derived as a variational EM approach (see Suppl. A). For
other generative models similar truncation approaches have successfully been used [19, 23]. For
the learning algorithm, K_n in (6) is chosen to contain hidden states \vec{s} with at most \gamma active causes,
\sum_h s_h \le \gamma. Furthermore, we only consider the combinatorics of H' \le H hidden variables. More
formally we define:

K_n = \{ \vec{s} \;|\; \sum_j s_j \le \gamma \ \text{and} \ \forall i \notin I : s_i = 0, \ \text{or} \ \sum_j s_j \le 1 \},   (7)

where the index set I contains those H' hidden variables that are the most likely to have generated
data point \vec{y}^{(n)} (the last term in Eqn. 7 assures that all states \vec{s} with just one non-zero entry are also
evaluated). To determine the H' hidden variables for I we use those variables h with the H' largest
values of a selection function S_h(\vec{y}^{(n)}) which is given by:
S_h(\vec{y}^{(n)}) = \pi \, \mathcal{N}(\vec{y}^{(n)};\, \vec{W}_h^{\mathrm{eff}},\, \sigma^2 \mathbb{1}),  with an effective weight  W_{dh}^{\mathrm{eff}} = \max\{ y_d, W_{dh} \}.   (8)

Selecting hidden variables based on S_h(\vec{y}^{(n)}) is equivalent to selecting them based on an upper
bound of p(s_h = 1 \,|\, \vec{y}^{(n)}, \Theta). To see this, note that p(\vec{y}^{(n)} \,|\, \Theta) is independent of h and that:
p(s_h = 1 \,|\, \vec{y}^{(n)}, \Theta) \; p(\vec{y}^{(n)} \,|\, \Theta) = \sum_{\vec{s},\, s_h = 1} \prod_d p(y_d^{(n)} \,|\, \overline{W}_d(\vec{s}, W), \sigma) \, p(\vec{s} \,|\, \pi) \;\le\; \sum_{\vec{s},\, s_h = 1} \prod_d p(y_d^{(n)} \,|\, W_{dh}^{\mathrm{eff}}, \sigma) \, p(\vec{s} \,|\, \pi),

with the right-hand side being equal to S_h(\vec{y}^{(n)}) in Eqn. 8 (see Suppl. B for details). A low value
of S_h(\vec{y}^{(n)}) thus implies a low value of p(s_h = 1 \,|\, \vec{y}^{(n)}, \Theta) and hence a low likelihood that cause h
has generated data point \vec{y}^{(n)}. In numerical experiments on ground-truth data we have verified that
for most data points Eqn. 6 with Eqn. 7 indeed finally approximates the true expectation values with
high accuracy.
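The truncated E-step can be summarized in a short sketch. The following hedged numpy implementation enumerates the states of K_n (Eqn. 7), selects the index set I with the selection function of Eqn. 8, and evaluates the truncated expectation of Eqn. 6; all function and variable names are illustrative, and constants shared by all states are dropped where they cancel in the ratio:

```python
import itertools
import numpy as np

def truncated_expectation(y, W, pi, sigma, gamma, H_sel, g):
    """Approximate <g(s)>_{q^(n)} via Eqn. 6 with the state set K_n of
    Eqn. 7 (a sketch for a single data point y; W has shape (D, H))."""
    D, H = W.shape
    # Selection function of Eqn. 8 in the log-domain (h-independent terms
    # dropped): Gaussian around the effective weights max(y_d, W_dh)
    W_eff = np.maximum(y[:, None], W)
    S = -0.5 * ((y[:, None] - W_eff) ** 2).sum(axis=0) / sigma ** 2
    I = np.argsort(S)[-H_sel:]                  # the H' most likely causes

    def log_joint(s):                           # log p(s, y | Theta) + const
        mean = (W * s[None, :]).max(axis=1)
        ll = -0.5 * ((y - mean) ** 2).sum() / sigma ** 2
        k = s.sum()
        return ll + k * np.log(pi) + (H - k) * np.log(1.0 - pi)

    # K_n: the empty state, all singletons, and all combinations of up to
    # gamma active causes among the preselected indices I
    states = [np.zeros(H, bool)]
    for h in range(H):
        e = np.zeros(H, bool); e[h] = True; states.append(e)
    for k in range(2, gamma + 1):
        for comb in itertools.combinations(I, k):
            s = np.zeros(H, bool); s[list(comb)] = True; states.append(s)

    logp = np.array([log_joint(s) for s in states])
    w = np.exp(logp - logp.max()); w /= w.sum()  # truncated posterior weights
    return sum(wi * g(s) for wi, s in zip(w, states))
```

Evaluating the sums over K_n instead of over all 2^H states is what makes the E-step tractable for hundreds of hidden variables.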
Having derived tractable approximations for the expectation values (6) in the E-step, let us now
derive parameter update equations in the M-step. An update rule for the weight matrix W of this
model was derived in [19] and is given by:

W_{dh}^{\mathrm{new}} = \frac{ \sum_{n \in M} \langle A_{dh}(\vec{s}, W) \rangle_{q^{(n)}} \, y_d^{(n)} }{ \sum_{n \in M} \langle A_{dh}(\vec{s}, W) \rangle_{q^{(n)}} },  where  A_{dh}(\vec{s}, W) = \frac{\partial}{\partial W_{dh}} \overline{W}_d^{\,\rho}(\vec{s}, W),   (9)

\overline{W}_d^{\,\rho}(\vec{s}, W) = \Big( \sum_{h=1}^{H} (s_h W_{dh})^{\rho} \Big)^{1/\rho},   (10)

where the parameter \rho is set to a large value (we used \rho = 20). The derivation of the update rule
for \sigma (Gaussian noise has previously not been used) is straight-forward, and the update equation is
given by:

\sigma^{\mathrm{new}} = \sqrt{ \frac{1}{|M| \, D} \sum_{n \in M} \Big\langle \big\| \vec{y}^{(n)} - \max_h\{ s_h \vec{W}_h \} \big\|^2 \Big\rangle_{q_n} }.   (11)
Note that in (9) to (11) we do not sum over all data points \vec{y}^{(n)} but only over those in a subset M
(|M| is the number of elements in M). The subset contains the data points for which (6) finally
represents a good approximation. It is defined to contain the N^{\mathrm{cut}} data points with largest values
of \sum_{\vec{s} \in K_n} p(\vec{s}, \vec{y}^{(n)} \,|\, \Theta'), i.e., with the largest values for the denominator in (6). N^{\mathrm{cut}} is hereby the
expected number of data points that have been generated by states with less than or equal to \gamma non-zero
entries:

N^{\mathrm{cut}} = N \sum_{\vec{s},\, |\vec{s}| \le \gamma} p(\vec{s} \,|\, \pi) = N \sum_{\gamma'=0}^{\gamma} \binom{H}{\gamma'} \pi^{\gamma'} (1 - \pi)^{H - \gamma'}.   (12)

The selection of data points is an important difference to earlier truncation approaches (compare
[19, 23]), and its necessity can be shown analytically (Suppl. A).
Update equations (9), (10), and (11) have been derived by setting the derivatives of the free-energy
(w.r.t. W and \sigma) to zero. Similarly, we can derive the update equation for the sparseness parameter
\pi. However, as the approximation only considers states \vec{s} with a maximum of \gamma non-zero entries, the
update has to correct for an underestimation of \pi (compare Suppl. A). If such a correction is taken
into account, we obtain the update rule:
\pi^{\mathrm{new}} = \frac{A(\gamma)}{B(\gamma)} \, \frac{1}{|M|} \sum_{n \in M} \langle |\vec{s}\,| \rangle_{q_n},  with  |\vec{s}\,| = \sum_{h=1}^{H} s_h,   (13)

A(\gamma) = \sum_{\gamma'=0}^{\gamma} \binom{H}{\gamma'} \pi^{\gamma'} (1 - \pi)^{H - \gamma'}  and  B(\gamma) = \sum_{\gamma'=0}^{\gamma} \gamma' \binom{H}{\gamma'} \pi^{\gamma' - 1} (1 - \pi)^{H - \gamma'}.   (14)

Note that the correction factor A(\gamma)/B(\gamma) in (13) is equal to one over H if we allow for all possible states
(i.e., \gamma = H' = H). Also the set M becomes equal to the set of all data points in this case (because
N^{\mathrm{cut}} = N). For \gamma = H' = H, Eqn. 13 thus falls back to the exact EM update rule that can canonically
be derived by setting the derivative of (5) w.r.t. \pi to zero (while using the exact posterior). Also the
update equations (9), (10), and (11) fall back to their canonical form for \gamma = H' = H. By choosing a
\gamma between one and H we can thus choose the accuracy of the used approximation. The higher the
value of \gamma the more accurate is the approximation, but the larger are also the computational costs. For
intermediate values of \gamma we can obtain very good approximations with small computational costs.
Crucial for the scalability to large-scale problems is hereby the preselection of H' < H hidden
variables using the selection function in Eqn. 8.
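As a concrete illustration of the correction, the following sketch computes A(\gamma), B(\gamma) and the corrected sparseness update; `scipy.special.comb` provides the binomial coefficients, the function names are our own, and 0 < pi < 1 is assumed:

```python
import numpy as np
from scipy.special import comb

def correction_factor(H, gamma, pi):
    """A(gamma)/B(gamma) from Eqn. 14; note that A(gamma) also gives
    N_cut = N * A(gamma) via Eqn. 12. For gamma = H the factor reduces
    to 1/H, recovering the exact EM update."""
    gp = np.arange(gamma + 1)
    A = np.sum(comb(H, gp) * pi ** gp * (1.0 - pi) ** (H - gp))
    B = np.sum(gp * comb(H, gp) * pi ** (gp - 1.0) * (1.0 - pi) ** (H - gp))
    return A / B

def update_pi(H, gamma, pi, s_abs):
    """Corrected sparseness update of Eqn. 13; s_abs holds <|s|>_{q_n}
    for every data point n in the subset M."""
    return correction_factor(H, gamma, pi) * np.mean(s_abs)
```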
Figure 2: Illustration of patch preprocessing and basis function visualization. The left-hand side
shows data points obtained from gray-value patches after DoG filtering. These patches are transformed to non-negative data by Eqn. 15. The algorithm maximizes the data likelihood under the
MCA model (1) and (2), and infers basis functions (second from the right). For visualization, the
basis functions are displayed after their parts have been recombined again.
3 Numerical Experiments
The update equations (9), (10), (11), and (13) together with approximation (6) with (7) and (8) define
a learning algorithm that optimizes the full set of parameters of the MCA generative model (1) and
(2). We will apply the algorithm to visual data as received by the primary visual cortex of mammals.
In mammals, visual information is transferred to the cortex via two types of neurons in the lateral
geniculate nucleus (LGN): center-on and center-off cells. The sensitivity of center-on neurons can be
modeled by a Difference of Gaussians (DoG) filter with positive central part, while the sensitivity of
center-off cells can be modelled by an inverted such filter. A model for preprocessing of an image
patch is thus given by a DoG filter and a successive splitting of the positive and the negative parts
of the filtered image. More formally, we use a DoG filter to generate patches \tilde{\vec{y}} with \tilde{D} = 26 \times 26
pixels. Such a patch is then converted to a patch of size D = 2\tilde{D} by assigning:

y_d = [\tilde{y}_d]_+   and   y_{\tilde{D}+d} = [-\tilde{y}_d]_+   (15)

(for d = 1, \ldots, \tilde{D}) where [x]_+ = x for x \ge 0 and [x]_+ = 0 otherwise. This procedure has
repeatedly been used in the context of visual data processing (see, e.g., [24]) and is, as discussed,
closely aligned with mammalian visual preprocessing (see Fig. 2 for an illustration).
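The preprocessing pipeline is easy to state in code. Below is a minimal sketch of DoG filtering followed by the channel splitting of Eqn. 15; the filter widths (`sigma_c` and the center-to-surround width ratio) are illustrative assumptions, not the exact values used in the experiments:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(patch, sigma_c=1.0, surround_ratio=3.0):
    """DoG filtering of a gray-value patch, then channel splitting
    (Eqn. 15) into non-negative center-on and center-off parts."""
    dog = (gaussian_filter(patch, sigma_c)
           - gaussian_filter(patch, surround_ratio * sigma_c))
    y_tilde = dog.ravel()                                # \tilde{y}, length D~
    return np.concatenate([np.maximum(y_tilde, 0.0),     # y_d, center-on
                           np.maximum(-y_tilde, 0.0)])   # y_{D~+d}, center-off
```

The resulting vector has length D = 2\tilde{D} and is non-negative, which matches the non-negative basis functions combined by the point-wise maximum.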
Before we applied the algorithm to natural image patches, it was first evaluated on artificial
data with ground-truth. As inferred basis functions of images most commonly resemble Gabor
wavelets, we used Gabor functions for the generation of artificial data. The Gabor basis functions
were combined according to the MCA generative model (1) and (2). We used H^{gen} = 400 Gabor
functions for generation. The variances of the Gaussian envelope of each Gabor were sampled
from a distribution in n_x/n_y-space (Fig. 3C), with \sigma_x and \sigma_y denoting the standard deviations
of the Gaussian envelope and with f denoting the Gabor frequency. Angular phases and centers of the Gabors were sampled from uniform distributions. The module of the wave vector was
set to 1 (f = 1/(2\pi)) and the envelope amplitude was 10. The parameters were chosen to lie in the
same range as the parameters inferred in preliminary runs of the algorithm on natural image patches.
For the generation of each artificial patch we drew a binary vector \vec{s} according to (1) with \pi H^{gen} = 2.
We then selected the |\vec{s}| corresponding Gabor functions and used channel-splitting (15) to convert
them into basis functions with only non-negative parts. To form an artificial patch, these basis functions were combined using the point-wise maximum according to (2). We generated N = 150 000
patches as data points in this way (Fig. 3A shows some examples).
The algorithm was applied with H = 300 hidden variables and approximation parameters \gamma = 3 and
H' = 8. We generated the data with a larger number of basis functions to better match the continuous
distribution of the real generating components of images. The basis functions \vec{W}_h were initialized
Figure 3: A Artificial patches generated by combining artificial Gabors
using a point-wise maximum. B Inferred basis functions if the MCA
learning algorithm is applied. C Comparison between the shapes of generating (green) and inferred (blue)
Gabors. The brighter the blue data
points the larger the error between the
basis function and the matched Gabor
(also for Fig. 5).
by setting them to the average over all the preprocessed input patches plus a small Gaussian white
noise (\approx 0.5% of the corresponding mean). The initial noise parameter \sigma was set following Eqn. 11
by using all data points (setting |M| = N initially). Finally, the initial sparseness level was set to
\pi H = 2. The model parameters were updated according to Eqns. 9 to 13 using 60 EM iterations. To
help avoid local optima, a small amount of Gaussian white noise (\approx 0.5% of the average basis
function value) was added during the first 20 iterations, was linearly decreased to zero between
iterations 20 and 40, and was kept at zero for the last 20 iterations. During the first 20 iterations the
updates considered all N data points (|M| = N). Between iteration number 20 and 40 the number
of used data points was linearly decreased to |M| = N^{cut}, where it was kept constant for the last
20 iterations. Considering all data points for the updates initially has proven beneficial because the
selection of data points is based on very incomplete knowledge during the first iterations.
Fig. 3B displays some of the typical basis functions that were recovered in a run of the algorithm
on artificial patches. As can be observed (and as could have been expected), they resemble Gabor
functions. When we matched the obtained basis functions with Gabor functions (compare, e.g.,
[25, 26, 27] for details), the obtained Gabor parameters can be analyzed further. We thus plotted
the values parameterizing the Gabor shapes in an n_x/n_y-plot. This also allowed us to investigate
how well the generating distribution of artificial Gabors was recovered. Fig. 3C shows the generating (green) and the recovered distribution of Gabors (blue). Although a few recovered basis
functions lie relatively far from the generating distribution, it is in general recovered well. The
recovered sparseness level, \pi H = 2.62, was a bit larger than the initial level of \pi H^{gen} = 2.
This is presumably due to the smaller number of basis functions in the model, H < H^{gen}. Also
the finite inferred noise level of \sigma = 0.37 (despite a generation without noise) can be explained by
this mismatch. Depending on the parameters of the controls, we can observe different amounts of
outliers (usually not more than 5%-10%). These outliers are usually basis functions that represent
more than one Gabor or small Gabor parts. Importantly, however, we found that the large majority
of inferred Gabors consistently recovered the generating Gabor functions in n_x/n_y-plots. In particular, when we changed the angle of the generating distribution in the n_x/n_y-plots (e.g., to 25° or
65°), the angle of the recovered distributions changed accordingly. Note that these controls are a
quantitative version of the artificial Gabor and grating data used for controls in [1].
Application to Image Patches. The dataset used in the experiment on natural images was prepared
by sampling N = 200 000 patches of \tilde{D} = 26 \times 26 pixels from the van Hateren image database
[28] (while constraining random selection to patches of images without man-made structures). We
preprocessed the patches as described above using a DoG filter¹ with a ratio of 3:1 between positive
and negative parts (see, e.g., [29]) before converting the patches using Eqn. 15.
The algorithm was applied with H = 400 hidden variables and approximation parameters \gamma = 4
and H' = 12. We used parameter initialization as described above and ran 120 EM iterations
(also as described above). After learning, the inferred sparseness level was \pi H = 1.63 and the
inferred noise level was \sigma = 1.59. We found the inferred basis functions to resemble Gabor-like
functions at different locations, and with different orientations and frequencies. Additionally,
we obtained many globular basis functions with no or very little orientation preference. Fig. 4
shows a selection of the H = 400 functions after a run of the algorithm (see suppl. Fig. C.1 for
¹ Filter parameters were chosen as in [27]; beforehand, the brightest 2% of the pixels were clamped to the maximal
value of the remaining 98% (the influence of light reflections was reduced in this way).
Figure 4: Numerical experiment on image patches. A Random selection of 125 basis functions of
the H=400 inferred. B Selection of most globular functions and C most elongated functions. D Selection of preprocessed patches extracted from natural images. E Selection of data points generated
according to the model using the inferred basis functions and sparseness level (but no noise).
all functions). The patches in Fig. 4D,E were chosen to demonstrate the high similarity between
preprocessed natural patches (in D) and generated ones (in E). To highlight the diversity of obtained
basis functions, Figs. 4B,C display some of the most globular and elongated examples, respectively.
The variety of Gabor shapes is currently actively discussed [30, 31, 10, 32, 27] since it became
obvious that standard linear models (e.g., SC and ICA) could not explain this diversity [33]. To
facilitate comparison with earlier approaches, we have applied Gabor matching (compare [25])
and analyzed the obtained parameters. Instead of matching the basis functions directly, we first
computed estimates of their corresponding receptive fields (RFs). These estimates were obtained by
convolving the basis functions with the same DoG filter as used for preprocessing (see, e.g., [27]
and Suppl. C.1 for details). In controls we found that these convolved fields were closely matched
by RFs estimated using reverse correlation as described, e.g., in [7].
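For reference, the Gabor parametrization underlying the n_x/n_y analysis can be written as follows; the exact matching procedure (a non-linear fit of these parameters to each field) is described in [25, 27], and this sketch only fixes notation:

```python
import numpy as np

def gabor(xx, yy, x0, y0, theta, f, phi, sigma_x, sigma_y, amp):
    """2D Gabor function; xx, yy are coordinate grids (e.g. from
    np.meshgrid). Shape indices as in [33]: n_x = sigma_x * f and
    n_y = sigma_y * f, with f the frequency of the wave."""
    xr = (xx - x0) * np.cos(theta) + (yy - y0) * np.sin(theta)
    yr = -(xx - x0) * np.sin(theta) + (yy - y0) * np.cos(theta)
    envelope = np.exp(-xr ** 2 / (2 * sigma_x ** 2)
                      - yr ** 2 / (2 * sigma_y ** 2))
    return amp * envelope * np.cos(2 * np.pi * f * xr + phi)
```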
Figure 5: Analysis of Gabor parameters (H = 400). A Angle-frequency plot of basis functions. B n_x/n_y distribution of basis functions. C Distribution measured in vivo [33] (red triangles) and corresponding distribution of MCA basis functions (blue).
After matching the (convolved) fields with Gabor functions, we found a relatively homogeneous
distribution of the fields' orientations, as is commonly observed (Fig. 5A). The frequencies are
distributed around 0.1 cycles per pixel, which reflects the band-pass property of the DoG filter. To
analyze the Gabor shapes, we plotted the parameters using an n_x/n_y-plot (as suggested in [33]). The
broad distribution in n_x/n_y-space hereby reflects the high diversity of basis functions obtained by
our algorithm (see Fig. 5B). The specific form of the obtained shape distribution is, hereby, similar
to the distribution of macaque V1 simple cells as measured in in vivo recordings [33]. However, the
MCA basis functions do not quantitatively match the measurements exactly (see Fig. 5C): the MCA
distribution contains a higher percentage of strongly elongated basis functions, and many MCA
functions are shifted slightly to the right relative to the measurements. If the basis functions are
matched with Gabors directly, we actually do not observe the latter effect (see suppl. Fig. C.2). If
simple-cell responses are associated with the posterior probabilities of multiple-cause models, the
basis functions should, however, not be compared to measured RFs directly (although it is frequently
done in the literature).
7
To investigate the implications of different numbers of hidden variables, we also ran the algorithm
with H = 200 and H = 800. In both cases we observed qualitatively and quantitatively similar
distributions of basis functions. Runs with H = 200 thus also contained many circular symmetric
basis functions (see suppl. Fig. C.3 for the distribution of shapes). This observation is remarkable
because it shows that such "globular" fields are a very stable feature for the MCA approach, also for
small numbers of hidden variables. Based on standard generative models with linear superposition
it has recently been argued [32] that such functions are only obtained in a regime with large numbers
of hidden variables relative to the input dimensionality (see [34] for an early contribution).
4 Discussion
We have studied the application of a strongly non-linear generative model to image patches. The
model combines basis functions using a point-wise maximum as an alternative to the linear combination as assumed by Sparse Coding, ICA, and most other approaches. Our results suggest that
changing the component combination rule has a strong impact on the distribution of inferred basis functions. While we still obtain Gabor-like functions, we robustly observe a large variety of
basis functions. Most notably, we obtain circular symmetric functions as well as many elongated
functions that are closely associated with edges traversing the entire patch (compare Figs. 1 and 4).
Approaches using linear component combination, e.g. ICA or SC, usually do not show these features. The differences in basis function shapes between non-linear and linear approaches are, in this
respect, consistent with the different types of interferences between basis functions. The maximum
results in basis function combinations with much less pronounced interferences, while the stronger
interferences of linear combinations might result in a repulsive effect fostering less elongated fields
(compare Fig. 1).
For linear approaches, a large diversity of Gabor shapes (including circular symmetric fields) could
only be obtained in very over-complete settings [34], or with specifically modelled priors with hand-set
sparseness levels [10]. Such studies were motivated by a recently observed discrepancy between receptive fields as predicted by SC or ICA and receptive fields as measured in vivo [33]. Compared to
these measurements, the MCA basis functions and their approximate receptive fields show a similar
diversity of shapes. MCA functions and measured RFs both show circular symmetric fields, and
in both cases there is a tendency towards fields elongated orthogonal to the wave-vector direction
(compare Fig. 4). Possible factors that can influence the distributions of basis functions, for MCA as
well as for other methods, are hereby different types of preprocessing, different prior distributions,
and different noise models. Even if the prior type is fixed, differences for the basis functions have
been reported for different settings of prior parameters (e.g., [10]). If possible, these parameters
should thus be learned along with the basis functions. All the different factors named above may
result in quantitative differences, and the shift of the MCA functions relative to the measurements
might have been caused by one of these factors. For the MCA model, possible effects of assuming
binary hidden variables remain to be investigated. Presumably, dependencies between hidden
variables, as investigated in recent contributions [e.g. 13, 12, 11], also play an important role, e.g., if larger
structures of specific arrangements of edges and textures are considered. As the components in such
models are combined less randomly, the implications of their combination rule may even be more
pronounced in these cases.
In conclusion, probably neither the linear nor the maximum combination rule represents
the exact model for local visual component combinations. However, while linear component
combinations have extensively been studied in the context of image statistics, the investigation of
other combination rules has been limited to relatively small scale applications [17, 16, 35, 19].
Applying a novel training scheme, we could overcome this limitation in the case of the MCA
generative model. As with linear approaches, we found that Gabor-like basis functions are obtained.
The statistics of their shapes, a subject that is currently and actively discussed [31, 10, 32, 26, 27], is
markedly different, however. Future work should, thus, at least be aware that a linear combination
of components is not the only possible choice. To recover the generating causes of image patches,
a linear combination might, furthermore, not be the best choice. With the results presented in this
work, it can no longer be considered the only practical one.
Acknowledgements. We gratefully acknowledge funding by the German Federal Ministry of Education and
Research (BMBF) in the project 01GQ0840 (BFNT Frankfurt) and by the German Research Foundation
(DFG) in the project LU 1196/4-1. Furthermore, we gratefully acknowledge support by the Frankfurt Center
for Scientific Computing (CSC Frankfurt) and thank Marc Henniges for his help with Fig. 2.
References
[1] B. A. Olshausen, D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[2] P. Comon. Independent component analysis, a new concept? Signal Proc, 36(3):287-314, 1994.
[3] A. J. Bell, T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327-38, 1997.
[4] A. Hyvärinen, E. Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483-1492, 1997.
[5] H. Lee, A. Battle, R. Raina, A. Ng. Efficient sparse coding algorithms. NIPS 22, 801-808, 2007.
[6] M. W. Seeger. Bayesian Inference and Optimal Design for the Sparse Linear Model. Journal of Machine Learning Research, 759-813, 2008.
[7] P. Dayan, L. F. Abbott. Theoretical Neuroscience. MIT Press, Cambridge, 2001.
[8] P. Berkes, R. Turner, M. Sahani. On sparsity and overcompleteness in image models. NIPS 20, 2008.
[9] B. A. Olshausen, K. J. Millman. Learning sparse codes with a mixture-of-Gaussians prior. NIPS 12, 841-847, 2000.
[10] M. Rehn, F. T. Sommer. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. J Comp Neurosci, 22(2):135-146, 2007.
[11] A. Hyvärinen, P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705-1720, 2000.
[12] F. Sinz, E. P. Simoncelli, M. Bethge. Hierarchical modeling of local image features through Lp-nested symmetric distributions. NIPS 22, 1696-1704, 2009.
[13] D. Zoran, Y. Weiss. The "Tree-Dependent Components" of Natural Images are Edge Filters. NIPS 22, 2340-2348, 2009.
[14] B. S. Everitt. An Introduction to Latent Variable Models. Chapman and Hall, 1984.
[15] D. D. Lee, H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788-91, 1999.
[16] P. Dayan, R. S. Zemel. Competition and multiple cause models. Neural Computation, 7:565-579, 1995.
[17] E. Saund. A multiple cause mixture model for unsupervised learning. Neural Computation, 7:51-71, 1995.
[18] H. Lappalainen, X. Giannakopoulos, A. Honkela, J. Karhunen. Nonlinear independent component analysis using ensemble learning: Experiments and discussion. Proc. ICA, 2000.
[19] J. Lücke, M. Sahani. Maximal causes for non-linear component extraction. Journal of Machine Learning Research, 9:1227-1267, 2008.
[20] N. Jojic, B. Frey. Learning flexible sprites in video layers. CVPR, 199-206, 2001.
[21] N. Le Roux, N. Heess, J. Shotton, J. Winn. Learning a generative model of images by factoring appearance and shape. Technical Report, Microsoft Research, 2010.
[22] R. Neal, G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.
[23] J. Lücke, R. Turner, M. Sahani, M. Henniges. Occlusive Components Analysis. NIPS, 1069-1077, 2009.
[24] P. O. Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research, 5:1457-1469, 2004.
[25] J. P. Jones, L. A. Palmer. An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58(6):1233-1258, 1987.
[26] P. Berkes, B. L. White, J. Fiser. No evidence for active sparsification in the visual cortex. NIPS 22, 2009.
[27] J. Lücke. Receptive field self-organization in a model of the fine-structure in V1 cortical columns. Neural Computation, 21(10):2805-2845, 2009.
[28] J. H. van Hateren, A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc Roy Soc London B, 265:359-366, 1998.
[29] D. C. Somers, S. B. Nelson, M. Sur. An emergent model of orientation selectivity in cat visual cortical simple cells. The Journal of Neuroscience, 15:5448-5465, 1995.
[30] J. Lücke. Learning of representations in a canonical model of cortical columns. Cosyne 2006, 100, 2006.
[31] S. Osindero, M. Welling, G. E. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18:381-414, 2006.
[32] D. Arathorn, B. Olshausen, J. DiCarlo. Functional requirements of a visual theory. Workshop Cosyne. www.cosyne.org/c/index.php?title=Functional requirements of a visual theory, 2007.
[33] D. L. Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88:455-463, 2002. Data retrieved 2006 from manuelita.psych.ucla.edu/~dario.
[34] B. A. Olshausen, D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311-3325, 1997.
[35] S. Denève, T. Lochmann, U. Ernst. Spike based inference in a network with divisive inhibition. NeuralComp, Marseille, 2008.
3,460 | 4,133 | Learning Convolutional Feature Hierarchies for
Visual Recognition
Koray Kavukcuoglu1, Pierre Sermanet1, Y-Lan Boureau2,1,
Karol Gregor1, Michaël Mathieu1, Yann LeCun1
1 Courant Institute of Mathematical Sciences, New York University
2 INRIA - Willow project-team*
{koray,sermanet,ylan,kgregor,yann}@cs.nyu.edu, [email protected]
Abstract
We propose an unsupervised method for learning multi-stage hierarchies of sparse
convolutional features. While sparse coding has become an increasingly popular
method for learning visual features, it is most often trained at the patch level.
Applying the resulting filters convolutionally results in highly redundant codes
because overlapping patches are encoded in isolation. By training convolutionally
over large image windows, our method reduces the redundancy between feature
vectors at neighboring locations and improves the efficiency of the overall representation. In addition to a linear decoder that reconstructs the image from sparse
features, our method trains an efficient feed-forward encoder that predicts quasi-sparse features from the input. While patch-based training rarely produces anything but oriented edge detectors, we show that convolutional training produces
highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. We show that using these filters in a multi-stage convolutional network architecture improves performance on a number of
visual recognition and detection tasks.
1 Introduction
Over the last few years, a growing amount of research on visual recognition has focused on learning
low-level and mid-level features using unsupervised learning, supervised learning, or a combination
of the two. The ability to learn multiple levels of good feature representations in a hierarchical
structure would enable the automatic construction of sophisticated recognition systems operating,
not just on natural images, but on a wide variety of modalities. This would be particularly useful for
sensor modalities where our lack of intuition makes it difficult to engineer good feature extractors.
The present paper introduces a new class of techniques for learning features extracted through convolutional filter banks. The techniques are applicable to Convolutional Networks and their variants,
which use multiple stages of trainable convolutional filter banks, interspersed with non-linear operations, and spatial feature pooling operations [1, 2]. While ConvNets have traditionally been trained
in supervised mode, a number of recent systems have proposed to use unsupervised learning to pretrain the filters, followed by supervised fine-tuning. Some authors have used convolutional forms of
Restricted Boltzmann Machines (RBM) trained with contrastive divergence [3], but many of them
have relied on sparse coding and sparse modeling [4, 5, 6]. In sparse coding, a sparse feature vector z
is computed so as to best reconstruct the input x through a linear operation with a learned dictionary
matrix D. The inference procedure produces a code z* by minimizing an energy function:

L(x, z, D) = \frac{1}{2} \| x - D z \|_2^2 + |z|_1,   z^* = \arg\min_z L(x, z, D)   (1)

* Laboratoire d'Informatique de l'École Normale Supérieure (INRIA/ENS/CNRS UMR 8548)
Figure 1: Left: A dictionary with 128 elements, learned with patch based sparse coding model.
Right: A dictionary with 128 elements, learned with convolutional sparse coding model. The dictionary learned with the convolutional model spans the orientation space much more uniformly. In
addition it can be seen that the diversity of filters obtained by the convolutional sparse model is much
richer compared to the patch-based one.
The dictionary is obtained by minimizing the energy (1) w.r.t. D, \min_{z,D} L(x, z, D), averaged over a
training set of input samples. There are two problems with the traditional sparse modeling method
when training convolutional filter banks: 1: the representations of whole images are highly redundant because the training and the inference are performed at the patch level; 2: the inference for a
whole image is computationally expensive.
First problem. In most applications of sparse coding to image analysis [7, 8], the system is trained
on single image patches whose dimensions match those of the filters. After training, patches in
the image are processed separately. This procedure completely ignores the fact that the filters are
eventually going to be used in a convolutional fashion. Learning will produce a dictionary of filters
that are essentially shifted versions of each other over the patch, so as to reconstruct each patch
in isolation. Inference is performed on all (overlapping) patches independently, which produces a
very highly redundant representation for the whole image. To address this problem, we apply sparse
coding to the entire image at once, and we view the dictionary as a convolutional filter bank:
L(x, z, D) = \frac{1}{2} \Big\| x - \sum_{k=1}^{K} D_k * z_k \Big\|_2^2 + |z|_1,   (2)
where D_k is an s \times s 2D filter kernel, x is a w \times h image (instead of an s \times s patch), z_k is a 2D
feature map of dimension (w + s - 1) \times (h + s - 1), and "*" denotes the discrete convolution
operator. Convolutional Sparse Coding has been used by several authors, notably [6].
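As a sketch of what Eqn. 2 computes, the reconstruction and the energy can be written in a few lines of numpy/scipy; the shapes follow the conventions stated above, and the function name is our own:

```python
import numpy as np
from scipy.signal import convolve2d

def conv_energy(x, z, D):
    """Energy of Eqn. 2. x: (w, h) image; D: (K, s, s) filter bank;
    z: (K, w+s-1, h+s-1) feature maps."""
    recon = sum(convolve2d(zk, Dk, mode='valid')   # sum_k D_k * z_k
                for Dk, zk in zip(D, z))
    return 0.5 * np.sum((x - recon) ** 2) + np.abs(z).sum()
```

Note that the 'valid' convolution of a (w+s-1) x (h+s-1) map with an s x s kernel returns exactly a w x h reconstruction.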
To address the second problem, we follow the idea of [4, 5], and use a trainable, feed-forward, nonlinear encoder module to produce a fast approximation of the sparse code. The new energy function
includes a code prediction error term:
L(x, z, D, W) = \frac{1}{2} \Big\| x - \sum_{k=1}^{K} D_k * z_k \Big\|_2^2 + \sum_{k=1}^{K} \| z_k - f(W^k * x) \|_2^2 + |z|_1,   (3)
where z^* = \arg\min_z L(x, z, D, W), W^k is an encoding convolution kernel of size s \times s, and f
is a point-wise non-linear function. Two crucially important questions are the form of the non-linear
function f, and the optimization method to find z^*. Both questions will be discussed at length below.
The contribution of this paper is to address both issues simultaneously, thus allowing convolutional
approaches to sparse coding to scale up, and opening the road to real-time applications.
2 Algorithms and Method
In this section, we analyze the benefits of convolutional sparse coding for object recognition systems,
and propose convolutional extensions to the coordinate descent sparse coding (CoD) [9] algorithm
and the dictionary learning procedure.
2.1 Learning Convolutional Dictionaries
The key observation for modeling convolutional filter banks is that the convolution of a signal with
a given kernel can be represented as a matrix-vector product by constructing a special Toeplitz-structured matrix for each dictionary element and concatenating all such matrices to form a new
2
dictionary. Any existing sparse coding algorithm can then be used. Unfortunately, this method
incurs a cost, since the size of the dictionary then depends on the size of the input signal. Therefore,
it is advantageous to use a formulation based on convolutions rather than following the naive method
outlined above. In this work, we use the coordinate descent sparse coding algorithm [9] as a starting
point and generalize it using convolution operations. Two important issues arise when learning
convolutional dictionaries: 1. The boundary effects due to convolutions need to be properly handled.
2. The derivative of equation 2 should be computed efficiently. Since the loss is not jointly convex
in D and z, but is convex in each variable when the other one is kept fixed, sparse dictionaries are
usually learned by an approach similar to block coordinate descent, which alternately minimizes
over z and D (e.g., see [10, 8, 4]). One can use either batch [7] (by accumulating derivatives over
many samples) or online updates [8, 6, 5] (updating the dictionary after each sample). In this work,
we use a stochastic online procedure for updating the dictionary elements.
The updates to the dictionary elements, calculated from equation 2, are sensitive to the boundary
effects introduced by the convolution operator. The code units that are at the boundary might grow
much larger compared to the middle elements, since the outermost boundaries of the reconstruction
take contributions from only a single code unit, compared to the middle ones that combine s \times s units.
Therefore the reconstruction error, and correspondingly the derivatives, grow proportionally larger.
One way to properly handle this situation is to apply a mask on the derivatives of the reconstruction
error w.r.t. z: D^T * (x - D * z) is replaced by D^T * (mask(x) - D * z), where mask is a term-by-term
multiplier that either puts zeros or gradually scales down the boundaries.
Algorithm 1 Convolutional extension to coordinate descent sparse coding [9]. A subscript index
(set) of a matrix represents a particular element. For slicing the 4D tensor S we adopt MATLAB
notation for simplicity.
function ConvCoD(x, D, α)
  Set: S = D^T * D
  Initialize: z = 0; β = D^T * mask(x)
  Require: h_α: smooth thresholding function.
  repeat
    z̄ = h_α(β)
    (k, p, q) = arg max_{i,m,n} |z_imn - z̄_imn|   (k: dictionary index, (p,q): location index)
    bi = β_kpq
    β = β + (z_kpq - z̄_kpq) × align(S(:, k, :, :), (p, q))
    z_kpq = z̄_kpq, β_kpq = bi
  until change in z is below a threshold
end function
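A hedged Python transcription of Algorithm 1 is given below. For clarity it uses a plain soft threshold for h_α, assumes unit-norm filters (so the self-similarity term is 1), and omits the boundary mask discussed above; the orientation of the S tensor is chosen to be consistent with the correlation used to build β, and all names are illustrative:

```python
import numpy as np
from scipy.signal import correlate2d

def soft(u, alpha):
    """Plain soft threshold, standing in for the smooth h_alpha."""
    return np.sign(u) * np.maximum(np.abs(u) - alpha, 0.0)

def conv_cod(x, D, alpha, n_iter=500):
    """Sketch of Algorithm 1 (ConvCoD). x: (w, h) image; D: (K, s, s)
    unit-norm filter bank. Returns the sparse code z."""
    K, s, _ = D.shape
    # S[i, k][dp+s-1, dq+s-1] = interaction between code unit i at spatial
    # offset (dp, dq) and code unit k (consistent with beta below)
    S = np.array([[correlate2d(D[k], D[i], mode='full') for k in range(K)]
                  for i in range(K)])
    beta = np.array([correlate2d(x, D[k], mode='full') for k in range(K)])
    z = np.zeros_like(beta)
    P, Q = beta.shape[1:]
    for _ in range(n_iter):
        z_bar = soft(beta, alpha)
        k, p, q = np.unravel_index(np.argmax(np.abs(z - z_bar)), z.shape)
        delta = z_bar[k, p, q] - z[k, p, q]
        if abs(delta) < 1e-8:                  # converged
            break
        b = beta[k, p, q]
        # subtract the aligned column of S, clipped at the map boundaries
        p0, q0 = p - (s - 1), q - (s - 1)
        ps = slice(max(p0, 0), min(p0 + 2 * s - 1, P))
        qs = slice(max(q0, 0), min(q0 + 2 * s - 1, Q))
        beta[:, ps, qs] -= delta * S[:, k, ps.start - p0:ps.stop - p0,
                                           qs.start - q0:qs.stop - q0]
        z[k, p, q] = z_bar[k, p, q]
        beta[k, p, q] = b                      # the self-term is excluded
    return z
```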
The second important point in training convolutional dictionaries is the computation of the S =
D^T * D operator. For most algorithms like coordinate descent [9], FISTA [11] and matching pursuit [12], it is advantageous to store the similarity matrix (S) explicitly and use a single column at
a time for updating the corresponding component of the code z. For convolutional modeling, the same
approach can be followed with some additional care. In patch-based sparse coding, each element
(i, j) of S equals the dot product of dictionary elements i and j. Since the similarity of a pair of
dictionary elements also has to be considered in the spatial dimensions, each term is expanded as a "full"
convolution of two dictionary elements (i, j), producing a (2s-1) \times (2s-1) matrix. It is more convenient
to think about the resulting matrix as a 4D tensor of size K \times K \times (2s-1) \times (2s-1). One should
note that, depending on the input image size, proper alignment of corresponding column of this
tensor has to be applied in the z space. One can also use the steepest descent algorithm for finding
the solution to convolutional sparse coding given in equation 2, however using this method would
be orders of magnitude slower compared to specialized algorithms like CoD [9] and the solution
would never contain exact zeros. In algorithm 1 we explain the extension of the coordinate descent
algorithm [9] for convolutional inputs. Having formulated convolutional sparse coding, the overall
learning procedure is simple stochastic (online) gradient descent over dictionary D:
\forall x^i \in X^{\mathrm{training\ set}}: \; z^* = \arg\min_z L(x^i, z, D),   D \leftarrow D - \eta \, \frac{\partial L(x^i, z^*, D)}{\partial D}   (4)
The columns of D are normalized after each iteration. A convolutional dictionary with 128 elements,
which was trained on images from the Berkeley dataset [13], is shown in Figure 1.
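One stochastic step of Eqn. 4 can be sketched as follows; the learning rate and the per-step renormalization follow the description in the text, while the correlation-based gradient (with a flip to match scipy's correlation convention) is our own derivation:

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

def dictionary_step(x, z, D, lr=0.01):
    """One online update of Eqn. 4 for a single image x with its
    inferred code z; filters are renormalized afterwards."""
    recon = sum(convolve2d(zk, Dk, mode='valid') for Dk, zk in zip(D, z))
    r = x - recon                                        # residual
    for k in range(len(D)):
        # d/dD_k of 0.5*||r||^2: a (flipped) valid correlation of z_k with r
        grad = -correlate2d(z[k], r, mode='valid')[::-1, ::-1]
        D[k] -= lr * grad
        D[k] /= max(np.linalg.norm(D[k]), 1e-12)         # renormalize
    return D
```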
Figure 2: Left: Smooth shrinkage function. Parameters β and b control the smoothness and location
of the kink of the function. As β → ∞ it converges more closely to the soft thresholding operator.
Center: Total loss as a function of the number of iterations. The vertical dotted line marks the iteration
number when the diagonal Hessian approximation was updated. It is clear that for both encoder functions, the Hessian update improves the convergence significantly. Right: 128 convolutional filters (W)
learned in the encoder using the smooth shrinkage function. The decoder of this system is shown in
Figure 1.
2.2 Learning an Efficient Encoder
In [4], [14] and [15] a feedforward regressor was trained for fast approximate inference. In this
work, we extend their encoder module training to convolutional domain and also propose a new
encoder function that approximates sparse codes more closely. The encoder used in [14] is a simple
feedforward function which can also be seen as a small convolutional neural network: z? = g k ?
tanh(x ? W k ) (k = 1..K). This function has been shown to produce good features for object
recognition [14], however it does not include a shrinkage operator, thus its ability to produce sparse
representations is very limited. Therefore, we propose a different encoding function with a shrinkage
operator. The standard soft thresholding operator has the nice property of producing exact zeros
around the origin, however for a very wide region, the derivatives are also zero. In order to be able
to train a filter bank that is applied to the input before the shrinkage operator, we propose to use an
encoder with a smooth shrinkage operator z̃ = sh_{β^k, b^k}(x ∗ W^k), where k = 1..K and:
sh_{β^k, b^k}(s) = sign(s) · ( (1/β^k) log( exp(β^k · b^k) + exp(β^k · |s|) − 1 ) − b^k )   (5)
Note that each β^k and b^k is a single scalar per feature map k. The shape of the smooth shrinkage operator is given in figure 2 for several different values of β and b. It can be seen that β controls the smoothness of the kink of the shrinkage operator and b controls the location of the kink. The function is guaranteed to pass through the origin and is antisymmetric. The partial derivatives ∂sh/∂β and ∂sh/∂b can be easily written and these parameters can be learned from data.
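As a quick illustration, here is a NumPy sketch of the smooth shrinkage in equation 5 with arbitrary parameter values; checking that sh(0) = 0 and that large β recovers soft thresholding at b is a useful sanity test.

    import numpy as np

    def smooth_shrink(s, beta, b):
        # equation 5: antisymmetric, exactly zero at the origin, and approaching the
        # soft threshold sign(s) * max(|s| - b, 0) as beta grows
        return np.sign(s) * (np.log(np.exp(beta * b) + np.exp(beta * np.abs(s)) - 1.0) / beta - b)

    s = np.linspace(-2.0, 2.0, 9)
    print(smooth_shrink(s, beta=5.0, b=0.5))   # near zero inside [-b, b], about |s| - b outside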
Updating the parameters of the encoding function is performed by minimizing equation 3. The additional cost term penalizes the squared distance between the optimal code z and the prediction z̃. In a
sense, training the encoder module is similar to training a ConvNet. To aid faster convergence, we
use stochastic diagonal Levenberg-Marquardt method [16] to calculate a positive diagonal approximation to the hessian. We update the hessian approximation every 10000 samples and the effect
of hessian updates on the total loss is shown in figure 2. It can be seen that especially for the tanh
encoder function, the effect of using second order information on the convergence is significant.
2.3 Patch Based vs Convolutional Sparse Modeling
Natural images, sounds, and more generally, signals that display translation invariance in any dimension, are better represented using convolutional dictionaries. The convolution operator enables
the system to model local structures that appear anywhere in the signal. For example, if k × k image
patches are sampled from a set of natural images, an edge at a given orientation may appear at any
location, forcing local models to allocate multiple dictionary elements to represent a single underlying orientation. By contrast, a convolutional model only needs to record the oriented structure once,
since dictionary elements can be used at all locations. Figure 1 shows atoms from patch-based and
convolutional dictionaries comprising the same number of elements. The convolutional dictionary
does not waste resources modeling similar filter structure at multiple locations. Instead, it models more orientations, frequencies, and different structures including center-surround filters, double
center-surround filters, and corner structures at various angles.
In this work, we present two encoder architectures: 1. steepest descent sparse coding with tanh encoding function using g^k · tanh(x ∗ W^k); 2. convolutional CoD sparse coding with shrink encoding function using sh_{β,b}(x ∗ W^k). The time required for training the first system is much
higher than for the second system due to steepest descent sparse coding. However, the performance
of the encoding functions is almost identical.
2.4 Multi-stage architecture
Our convolutional encoder can be used to replace patch-based sparse coding modules used in multistage object recognition architectures such as the one proposed in our previous work [14]. Building
on our previous findings, for each stage, the encoder is followed by an absolute value rectification, contrast normalization and average subsampling. Absolute Value Rectification is a simple
pointwise absolute value function applied on the output of the encoder. Contrast Normalization
is the same operation used for pre-processing the images. This type of operation has been shown
to reduce the dependencies between components [17, 18] (feature maps in our case). When used in
between layers, the mean and standard deviation are calculated across all feature maps within a 9 × 9 neighborhood in spatial dimensions. The last operation, average pooling, is simply a spatial pooling
operation that is applied on each feature map independently.
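A compact sketch of this three-operation stage follows; the uniform_filter-based normalization and the pooling offset are our simplifications of the operations described above.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def contrast_normalize(f, size=9, eps=1e-6):
        # subtract the local mean and divide by the local std, both computed over a
        # size x size spatial neighborhood across all K feature maps; f is (K, H, W)
        box = (f.shape[0], size, size)
        centered = f - uniform_filter(f, size=box)
        var = uniform_filter(centered ** 2, size=box)
        return centered / np.sqrt(var + eps)

    def avg_pool(f, size, stride):
        # average pooling applied to each feature map independently
        pooled = uniform_filter(f, size=(1, size, size))
        return pooled[:, size // 2::stride, size // 2::stride]

    def stage_forward(x, encoder, pool_size=10, pool_stride=5):
        # one stage: encoder -> absolute value rectification -> contrast normalization
        # -> average pooling
        return avg_pool(contrast_normalize(np.abs(encoder(x))), pool_size, pool_stride)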
One or more additional stages can be stacked on top of the first one. Each stage then takes the
output of its preceding stage as input and processes it using the same series of operations with
different architectural parameters like size and connections. When the input to a stage is a series of
feature maps, each output feature map is formed by the summation of multiple filters.
In the next sections, we present experiments showing that using convolutionally trained encoders in
this architecture lead to better object recognition performance.
3 Experiments
We closely follow the architecture proposed in [14] for object recognition experiments. As stated
above, in our experiments, we use two different systems: 1. Steepest descent sparse coding with
tanh encoder: SDtanh . 2. Coordinate descent sparse coding with shrink encoder: CDshrink . In
the following, we give details of the unsupervised training and supervised recognition experiments.
3.1 Object Recognition using Caltech 101 Dataset
The Caltech-101 dataset [19] contains up to 30 training images per class and each image contains
a single object. We process the images in the dataset as follows: 1. Each image is converted to
gray-scale and resized so that the largest edge is 151. 2. Images are contrast normalized to obtain
locally zero mean and unit standard deviation input using a 9 × 9 neighborhood. 3. The short side
of each image is zero padded to 143 pixels. We report the results in Table 1 and 2. All results in
these tables are obtained using 30 training samples per class and 5 different choices of the training
set. We use the background class during training and testing.
Architecture : We use the unsupervised trained encoders in a multi-stage system identical to the
one proposed in [14]. At first layer 64 features are extracted from the input image, followed by a
second layers that produces 256 features. Second layer features are connected to fist layer features
through a sparse connection table to break the symmetry and to decrease the number of parameters.
Unsupervised Training : The input to unsupervised training consists of contrast normalized grayscale images [20] obtained from the Berkeley segmentation dataset [13]. Contrast normalization
consists of processing each feature map value by removing the mean and dividing by the standard
deviation calculated over a 9 × 9 region centered at that value over all feature maps.
First Layer: We have trained both systems using 64 dictionary elements. Each dictionary item is
a 9 × 9 convolution kernel. The resulting system to be solved is a 64 times overcomplete sparse
coding problem. Both systems are trained for 10 different sparsity values ranging between 0.1 and
3.0.
Second Layer: Using the 64 feature maps output from the first layer encoder on Berkeley images,
we train a second layer convolutional sparse coding. At the second layer, the number of feature
maps is 256 and each feature map is connected to 16 randomly selected input features out of 64.
Thus, we aim to learn 4096 convolutional kernels at the second layer. To the best of our knowledge,
none of the previous convolutional RBM [3] and sparse coding [6] methods have learned such a
large number of dictionary elements. Our aim is motivated by the fact that using such large number
of elements and using a linear classifier [14] reports recognition results similar to [3] and [6]. In
both of these studies a more powerful Pyramid Match Kernel SVM classifier [21] is used to match
the same level of performance. Figure 3 shows 128 filters that connect to 8 first layer features. Each
Figure 3: Second stage filters. Left: Encoder kernels that correspond to the dictionary elements.
Right: 128 dictionary elements, each row shows 16 dictionary elements, connecting to a single
second layer feature map. It can be seen that each group extracts similar type of features from their
corresponding inputs.
row of filters connect a particular second layer feature map. It is seen that each row of filters extract
similar features since their output response is summed together to form one output feature map.
Logistic Regression Classifier
       SDtanh        CDshrink      PSD [14]
U      57.1 ± 0.6%   57.3 ± 0.5%   52.2%
U+     57.6 ± 0.4%   56.4 ± 0.5%   54.2%
Table 1: Comparing SDtanh encoder to CDshrink encoder on Caltech 101 dataset using a single
stage architecture. Each system is trained using 64 convolutional filters. The recognition accuracy
results shown are very similar for both systems.
One Stage System: We train 64 convolutional unsupervised features using both SDtanh and
CDshrink methods. We use the encoder function obtained from this training followed by absolute value rectification, contrast normalization and average pooling. The convolutional filters used
are 9 × 9. The average pooling is applied over a 10 × 10 area with a 5 pixel stride. The output of the first layer is then 64 × 26 × 26 and fed into a logistic regression classifier and Lazebnik's PMK-SVM
classifier [21] (that is, the spatial pyramid pipeline is used, using our features to replace the SIFT
features).
Two Stage System: We train 4096 convolutional filters with SDtanh method using 64 input feature
maps from the first stage to produce 256 feature maps. The second layer features are also 9 × 9, producing 256 × 18 × 18 features. After applying absolute value rectification, contrast normalization and average pooling (on a 6 × 6 area with stride 4), the output features are 256 × 4 × 4 (4096)
dimensional. We only use multinomial logistic regression classifier after the second layer feature
extraction stage.
We denote unsupervised trained one stage systems with U , two stage unsupervised trained systems
with UU, and '+' indicates that supervised training is performed afterwards. R stands for randomly
initialized systems with no unsupervised training.
PMK-SVM [21] Classifier (hard quantization + multiscale pooling + intersection kernel SVM):
  SIFT [21]          64.6 ± 0.7%
  RBM [3]            66.4 ± 0.5%
  DN [6]             66.9 ± 1.1%
  SDtanh (U)         65.7 ± 0.7%
Logistic Regression Classifier:
  PSD [14] (UU)      63.7
  PSD [14] (U+U+)    65.5
  SDtanh (UU)        65.3 ± 0.9%
  SDtanh (U+U+)      66.3 ± 1.5%
Table 2: Recognition accuracy on Caltech 101 dataset using a variety of different feature representations using two stage systems and two different classifiers.
Comparing our U system using both SDtanh and CDshrink (57.1% and 57.3%) with the 52.2% reported in [14], we see that convolutional training results in significant improvement. With two layers
of purely unsupervised features (UU, 65.3%), we even achieve the same performance as the patch-based model of Jarrett et al. [14] after supervised fine-tuning (63.7%). Moreover, with additional supervised fine-tuning (U+U+) we match or perform very close to (66.3%) similar models [3, 6]
[Figure 4 plots: miss rate vs. false positives per image. Left panel: R+R+ (14.8%), U+U+ (11.5%). Right panel, bootstrapping passes: U+U+-bt0 (23.6%), -bt1 (16.5%), -bt2 (13.8%), -bt6 (12.4%), -bt3 (11.9%), -bt5 (11.7%), -bt4 (11.5%).]
Figure 4: Results on the INRIA dataset with per-image metric. Left: Comparing two best systems
with unsupervised initialization (UU) vs random initialization (RR). Right: Effect of bootstrapping
on final performance for unsupervised initialized system.
with two layers of convolutional feature extraction, even though these models use the more complex
spatial pyramid classifier (PMK-SVM) instead of the logistic regression we have used; the spatial
pyramid framework comprises a codeword extraction step and an SVM, thus effectively adding one
layer to the system. We get 65.7% with a spatial pyramid on top of our single-layer U system (with
256 codewords jointly encoding 2 ? 2 neighborhoods of our features by hard quantization, then max
pooling in each cell of the pyramid, with a linear SVM, as proposed by authors in [22]).
Our experiments have shown that sparse features achieve superior recognition performance compared to features obtained using a dictionary trained by a patch-based procedure as shown in Table 2. It is interesting to note that the improvement is larger when using feature extractors trained
in a purely unsupervised way, than when unsupervised training is followed by a supervised training
phase (57.1 to 57.6). Recalling that the supervised tuning is a convolutional procedure, this last
training step might have the additional benefit of decreasing the redundancy between patch-based
dictionary elements. On the other hand, this contribution would be minor for dictionaries which
have already been trained convolutionally in the unsupervised stage.
3.2 Pedestrian Detection
We train and evaluate our architecture on the INRIA Pedestrian dataset [23] which contains 2416
positive examples (after mirroring) and 1218 negative full images. For training, we also augment the
positive set with small translations and scale variations to learn invariance to small transformations,
yielding 11370 and 1000 positive examples for training and validation respectively. The negative set
is obtained by sampling patches from negative full images at random scales and locations. Additionally, we include samples from the positive set with larger and smaller scales to avoid false positives
from very different scales. With these additions, the negative set is composed of 9001 training and
1000 validation samples.
Architecture and Training
A similar architecture as in the previous section was used, with 32 filters, each 7 × 7, for the first layer and 64 filters, also 7 × 7, for the second layer. We used 2 × 2 average pooling between each
layer. A fully connected linear layer with 2 output scores (for pedestrian and background) was used
as the classifier. We trained this system on 78 × 38 inputs where pedestrians are approximately
60 pixels high. We have trained our system with and without unsupervised initialization, followed
by fine-tuning of the entire architecture in supervised manner. Figure 5 shows comparisons of our
system with other methods as well as the effect of unsupervised initialization.
After one pass of unsupervised and/or supervised training, several bootstrapping passes were performed to augment the negative set with the 10 most offending samples on each full negative image
and the bigger/smaller scaled positives. We select the most offending sample that has the biggest
opposite score. We limit the number of extracted false positives to 3000 per bootstrapping pass.
As [24] showed, the number of bootstrapping passes matters more than the initial training set. We
find that the best results were obtained after four passes, as shown in figure 5 improving from 23.6%
to 11.5%.
Per-Image Evaluation
[Figure 5 plot: miss rate vs. false positives per image, legend ordered by miss rate at 1 FPPI: Shapelet-orig (90.5%), PoseInvSvm (68.6%), VJ-OpenCv (53.0%), PoseInv (51.4%), Shapelet (50.4%), VJ (47.5%), FtrMine (34.0%), Pls (23.4%), HOG (23.1%), HikSvm (21.9%), LatSvm-V1 (17.5%), MultiFtr (15.6%), R+R+ (14.8%), U+U+ (11.5%), MultiFtr+CSS (10.9%), LatSvm-V2 (9.3%), FPDW (9.3%), ChnFtrs (8.7%).]
Figure 5: Results on the INRIA dataset with per-image metric. These curves are computed from the bounding boxes and confidences made available by [25]. Comparing our two best systems, labeled U+U+ and R+R+, with all the other methods.
Performance on the INRIA set is usually reported with the per-window methodology to avoid postprocessing biases, assuming that better per-window performance yields better per-image performance. However, [25] empirically showed that the per-window methodology fails to predict the
performance per-image and therefore is not adequate for real applications. Thus, we evaluate the
per-image accuracy using the source code available from [25], which matches bounding boxes with
the 50% PASCAL matching measure (intersection/union > 0.5).
In figure 5, we compare our best results (11.5%) to the latest state-of-the-art results (8.7%) gathered
and published on the Caltech Pedestrians website1 . The results are ordered by miss rate (the lower
the better) at 1 false positive per image on average (1 FPPI). The value of 1 FPPI is meaningful for
pedestrian detection because in real world applications, it is desirable to limit the number of false
alarms.
It can be seen from figure 4 that unsupervised initialization significantly improves the performance
(14.8%vs11.5%). The number of labeled images in INRIA dataset is relatively small, which limits
the capability of supervised learning algorithms. However, an unsupervised method can model large
variations in pedestrian pose, scale and clutter with much better success.
Top performing methods [26], [27], [28], [24] also contain several components that our simplistic model does not contain. Probably, the most important of all is color information, whereas we
have trained our systems only on gray-scale images. Another important aspect is training on multiresolution inputs [26], [27], [28]. Currently, we train our systems on fixed scale inputs with very
small variation. Additionally, we have used much lower resolution images than top performing systems to train our models (78 × 38 vs 128 × 64 in [24]). Finally, some models [28] use deformable
body parts models to improve their performance, whereas we rely on a much simpler pipeline of
feature extraction and linear classification.
Our aim in this work was to show that an adaptable feature extraction system that learns its parameters from available data can perform comparably to best systems for pedestrian detection. We
believe by including color features and using multi-resolution input our system?s performance would
increase.
4 Summary and Future Work
In this work we have presented a method for learning hierarchical feature extractors. Two different
methods were presented for convolutional sparse coding, it was shown that convolutional training of
feature extractors reduces the redundancy among filters compared with those obtained from patch
based models. Additionally, we have introduced two different convolutional encoder functions for
performing efficient feature extraction which is crucial for using sparse coding in real world applications. We have applied the proposed sparse modeling systems using a successful multi-stage
architecture on object recognition and pedestrian detection problems and performed comparably to
similar systems.
In the pedestrian detection task, we have presented the advantage of using unsupervised learning for
feature extraction. We believe unsupervised learning significantly helps to properly model extensive
variations in the dataset where a pure supervised learning algorithm fails. We aim to further improve
our system by better modeling the input by including color and multi-resolution information.
1. http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/files/data-INRIA
References
[1] LeCun, Y, Bottou, L, Bengio, Y, and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
[2] Serre, T, Wolf, L, and Poggio, T. Object recognition with features inspired by visual cortex. In CVPR'05 - Volume 2, pages 994–1000, Washington, DC, USA, 2005. IEEE Computer Society.
[3] Lee, H, Grosse, R, Ranganath, R, and Ng, A. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML'09, pages 609–616. ACM, 2009.
[4] Ranzato, M, Poultney, C, Chopra, S, and LeCun, Y. Efficient learning of sparse representations with an energy-based model. In NIPS'07. MIT Press, 2007.
[5] Kavukcuoglu, K, Ranzato, M, Fergus, R, and LeCun, Y. Learning invariant features through topographic filter maps. In CVPR'09. IEEE, 2009.
[6] Zeiler, M, Krishnan, D, Taylor, G, and Fergus, R. Deconvolutional Networks. In CVPR'10. IEEE, 2010.
[7] Aharon, M, Elad, M, and Bruckstein, A. M. K-SVD and its non-negative variant for dictionary design. In Papadakis, M, Laine, A. F, and Unser, M. A, editors, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, volume 5914, pages 327–339, August 2005.
[8] Mairal, J, Bach, F, Ponce, J, and Sapiro, G. Online dictionary learning for sparse coding. In ICML'09, pages 689–696. ACM, 2009.
[9] Li, Y and Osher, S. Coordinate Descent Optimization for l1 Minimization with Application to Compressed Sensing; a Greedy Algorithm. CAM Report, pages 09–17.
[10] Olshausen, B. A and Field, D. J. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
[11] Beck, A and Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Img. Sci., 2(1):183–202, 2009.
[12] Mallat, S and Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.
[13] Martin, D, Fowlkes, C, Tal, D, and Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV'01, volume 2, pages 416–423, July 2001.
[14] Jarrett, K, Kavukcuoglu, K, Ranzato, M, and LeCun, Y. What is the best multi-stage architecture for object recognition? In ICCV'09. IEEE, 2009.
[15] Gregor, K and LeCun, Y. Learning fast approximations of sparse coding. In Proc. International Conference on Machine Learning (ICML'10), 2010.
[16] LeCun, Y, Bottou, L, Orr, G, and Muller, K. Efficient backprop. In Orr, G and K., M, editors, Neural Networks: Tricks of the trade. Springer, 1998.
[17] Schwartz, O and Simoncelli, E. P. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, August 2001.
[18] Lyu, S and Simoncelli, E. P. Nonlinear image representation using divisive normalization. In CVPR'08. IEEE Computer Society, Jun 23-28 2008.
[19] Fei-Fei, L, Fergus, R, and Perona, P. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In Workshop on Generative-Model Based Vision, 2004.
[20] Pinto, N, Cox, D. D, and DiCarlo, J. J. Why is real-world visual object recognition hard? PLoS Comput Biol, 4(1):e27, 01 2008.
[21] Lazebnik, S, Schmid, C, and Ponce, J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. CVPR'06, 2:2169–2178, 2006.
[22] Boureau, Y, Bach, F, LeCun, Y, and Ponce, J. Learning mid-level features for recognition. In CVPR'10. IEEE, 2010.
[23] Dalal, N and Triggs, B. Histograms of oriented gradients for human detection. In Schmid, C, Soatto, S, and Tomasi, C, editors, CVPR'05, volume 2, pages 886–893, June 2005.
[24] Walk, S, Majer, N, Schindler, K, and Schiele, B. New features and insights for pedestrian detection. In CVPR 2010, San Francisco, California.
[25] Dollár, P, Wojek, C, Schiele, B, and Perona, P. Pedestrian detection: A benchmark. In CVPR'09. IEEE, June 2009.
[26] Dollár, P, Tu, Z, Perona, P, and Belongie, S. Integral channel features. In BMVC 2009, London, England.
[27] Dollár, P, Belongie, S, and Perona, P. The fastest pedestrian detector in the west. In BMVC 2010, Aberystwyth, UK.
[28] Felzenszwalb, P, Girshick, R, McAllester, D, and Ramanan, D. Object detection with discriminatively trained part based models. In PAMI 2010.
Sidestepping Intractable Inference
with Structured Ensemble Cascades
David Weiss*
Benjamin Sapp*
Ben Taskar
Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104, USA
{djweiss,bensapp,taskar}@cis.upenn.edu
Abstract
For many structured prediction problems, complex models often require adopting
approximate inference techniques such as variational methods or sampling, which
generally provide no satisfactory accuracy guarantees. In this work, we propose
sidestepping intractable inference altogether by learning ensembles of tractable
sub-models as part of a structured prediction cascade. We focus in particular on
problems with high-treewidth and large state-spaces, which occur in many computer vision tasks. Unlike other variational methods, our ensembles do not enforce
agreement between sub-models, but filter the space of possible outputs by simply
adding and thresholding the max-marginals of each constituent model. Our framework jointly estimates parameters for all models in the ensemble for each level of
the cascade by minimizing a novel, convex loss function, yet requires only a linear
increase in computation over learning or inference in a single tractable sub-model.
We provide a generalization bound on the filtering loss of the ensemble as a theoretical justification of our approach, and we evaluate our method on both synthetic
data and the task of estimating articulated human pose from challenging videos.
We find that our approach significantly outperforms loopy belief propagation on
the synthetic data and a state-of-the-art model on the pose estimation/tracking
problem.
1 Introduction
We address the problem of prediction in graphical models that are computationally challenging because of both high-treewidth and large state-spaces. A primary example where intractable, large
state-space models typically arise is in dynamic state estimation problems, including tracking articulated objects or multiple targets [1, 2]. The complexity stems from interactions of multiple
degrees-of-freedom (state variables) and fine-level resolution at which states need to be estimated.
Another typical example arises in pixel-labeling problems where the model topology is typically a
2D grid and the number of classes is large [3]. In this work, we propose a novel, principled framework called Structured Ensemble Cascades for handling state complexity while learning complex
models, extending our previous work on structured cascades for low-treewidth models [4].
The basic idea of structured cascades is to learn a sequence of coarse-to-fine models that are optimized to safely filter and refine the structured output state space, speeding up both learning and
inference. While we previously assumed (sparse) exact inference is possible throughout the cascade [4], in this work, we apply and extend the structured cascade framework to intractable hightreewidth models. To avoid intractable inference, we decompose the desired model into an ensemble
of tractable sub-models for each level of the cascade. For example, in the problem of tracking articulated human pose, each sub-model includes temporal dependency for a single body joint only.
* These authors contributed equally.
[Figure 1 diagram: at level m, sparse inference is run in each sub-model, the max marginals θ*_m(x, y_j) are summed and compared to the threshold t_m(x, α), low-scoring states are filtered, and the survivors in Y^{m+1} are refined and passed to level m+1.]
Figure 1: (a) Schematic overview of structured ensemble cascades. The m'th level of the cascade takes as input a sparse set of states Y^m for each variable y_j. The full model is decomposed into constituent sub-models (above, the three tree models used in the pose tracking experiment) and sparse inference is run. Next, the max marginals of the sub-models are summed to produce a single max marginal for each variable assignment: θ*(x, y_j) = Σ_p θ*_p(x, y_j). Note that each level and each constituent model will have different parameters as a result of the learning process. Finally, the state spaces are thresholded based on the max-marginal scores and low-scoring states are filtered. Each state is then refined according to a state hierarchy (e.g., spatial resolution, or semantic categories) and passed to the next level of the cascade. This process can be repeated as many times as desired. In (b), we illustrate two consecutive levels of the ensemble cascade on real data, showing the filtered hypotheses left for a single video example.
To maintain efficiency, inference in the sub-models of the ensemble is uncoupled (unlike in dual
decomposition [5]), but the decision to filter states depends on the sum of the max-marginals of
the constituent models (see Figure 1). We derive a convex loss function for joint estimation of submodels in each ensemble, which provably balances accuracy and efficiency, and we propose a simple
stochastic subgradient algorithm for training.
The novel contributions of this work are as follows. First, we provide a principled and practical generalization of structured cascades to intractable models. Second, we present generalization bounds
on the performance of the ensemble. Third, we introduce a challenging VideoPose dataset, culled
from TV videos, for evaluating pose estimation and tracking. Finally, we present an evaluation
of our approach on synthetic data and the VideoPose dataset. We find that our joint training of an
ensemble method outperforms several competing baselines on this difficult tracking problem.
2 Structured Cascades
Given an input space X, output space Y, and a training set {⟨x^1, y^1⟩, . . . , ⟨x^n, y^n⟩} of n samples from a joint distribution D(X, Y), the standard supervised learning task is to learn a hypothesis h : X → Y that minimizes the expected loss E_D[L(h(x), y)] for some non-negative loss function L : Y × Y → R_+. In structured prediction problems, Y is an ℓ-vector of variables and Y = Y_1 × · · · × Y_ℓ, and Y_i = {1, . . . , K}. In many settings the number of random variables, ℓ, differs depending on the input X, but for simplicity of notation we assume a fixed ℓ here. The linear hypothesis class we consider is of the form h(x) = argmax_{y∈Y} θ(x, y), where the scoring function θ(x, y) ≜ w^⊤ f(x, y) is the inner product of a vector of parameters w and a feature function f : X × Y → R^d mapping (x, y) pairs to a set of d real-valued features. We further assume that f decomposes over a set of cliques C over inputs and outputs, so that θ(x, y) = Σ_{c∈C} w^⊤ f_c(x, y_c). Above, y_c is an assignment
to the subset of Y variables in the clique c, and we will use Y_c to refer to the set of all assignments
to the clique. By considering different cliques over X and Y , f can represent arbitrary interactions
between the components of x and y. Evaluating h(x) is tractable for low-treewidth (hyper)graphs
but is NP-hard in general; typically, approximate inference is used when features are not low-treewidth.
In our prior work [4], we introduced the framework of Structured Prediction Cascades (SPC) to
handle problems with low-treewidth T but large node state-space K, which makes complexity of
O(K T ) prohibitive. For example, for a 5-th order linear chain model for handwriting recognition or
part-of-speech tagging, K is about 50, and exact inference is on the order 506 ? 15 billion times
the length the sequence. In tree-structured models we have used for for human pose estimation [6],
typical K for each part includes image location and orientation and is on the order of 250, 000, so
even K 2 in pairwise potentials is prohibitive. Rather than learning a single monolithic model, a
structured cascade is a coarse-to-fine sequence of increasingly complex models, where model complexity scales with Markov order in sequence models or spatial/angular resolution in pose models,
for example. The goal of each model is to filter out a large subset of assignments without eliminating
the correct one, so that the next level only has to consider a much reduced state-space. The filtering
process is feed-forward, and each stage uses inference to compute max-marginals which are used
to eliminate low-scoring node or clique assignments. The parameters of each model in the cascade
are learned using a loss function which balances accuracy (not eliminating correct assignment) and
efficiency (eliminating as many other assignments as possible).
More precisely, for each clique assignment y_c, there is a max marginal θ*(x, y_c), defined as the maximum score of any output y that contains the clique assignment y_c:
θ*(x, y_c) ≜ max_{y′ ∈ Y} { θ(x, y′) : y′_c = y_c }.   (1)
For simplicity, we will examine the case where the cliques that we filter are defined only over single variables: y_c = y_j (although the model may also contain larger cliques). Clique assignments are filtered by discarding any y_j for which θ*(x, y_j) ≤ t(x) for a threshold t(x). We define Y_j to be the set of possible states for the j'th variable. The threshold proposed in [4] is a "max mean-max" function,
t(x, α) = α θ*(x) + (1 − α) · (1 / Σ_{j=1}^ℓ |Y_j|) Σ_{j=1}^ℓ Σ_{y_j ∈ Y_j} θ*(x, y_j).   (2)
Filtering max marginals in this fashion can be learned because of the "safe filtering" property: ensuring that θ(x^i, y^i) > t(x^i, α) is sufficient (although not necessary) to guarantee that no marginal consistent with the true answer y^i will be filtered. Thus, for fixed α, [4] proposed learning parameters w to maximize the margin θ(x^i, y^i) − t(x^i, α) and therefore minimize filtering errors:
inf_{w, ξ≥0}  (λ/2)‖w‖² + (1/n) Σ_i ξ^i   s.t.  θ(x^i, y^i) ≥ t(x^i, α) + ℓ^i − ξ^i,  ∀i = 1, . . . , n   (3)
Above, ξ^i are slack variables for the margin constraints, and ℓ^i is the size of the i'th example.
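As a small illustration of the filtering step, the sketch below computes the "max mean-max" threshold of equation (2) and the induced filter from max marginals that some (sparse) inference routine has already produced; the list-of-arrays layout is our own choice.

    import numpy as np

    def max_mean_max_threshold(max_marginals, alpha):
        # max_marginals: list of 1-D arrays, one per variable y_j, over its states Y_j
        theta_star = max(mm.max() for mm in max_marginals)   # max score theta*(x)
        mean_mm = np.concatenate(max_marginals).mean()       # mean over all (j, y_j) pairs
        return alpha * theta_star + (1.0 - alpha) * mean_mm

    def filter_states(max_marginals, alpha):
        # keep, per variable, only the states whose max marginal beats the threshold
        t = max_mean_max_threshold(max_marginals, alpha)
        return [np.flatnonzero(mm > t) for mm in max_marginals]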
3 Structured Ensemble Cascades
In this work, we tackle the problem of learning a structured cascade for problems in which inference
is intractable, but in which the large node state-space has a natural hierarchy that can be exploited.
For example, such hierarchies arise in pose estimation by discretizing the articulation of joints at
multiple resolutions, or in image segmentation due to the semantic relationship between class labels
(e.g., ?grass? and ?tree? can be grouped as ?plants,? ?horse? and ?cow? can be grouped as ?animal.?)
Although the methods discussed in this section can be applied to more general intractable settings,
and our prior work considered more general cascades that operate on graph cliques, we will assume
for simplicitly that the structured cascades operate in a ?node-centric? coarse-to-fine manner as follows. For each variable yj in the model, each level of the cascade filters a current set of possible
states Yj , and any surviving states are passed forward to the next level of the cascade by substituting
each state with its set of descendents in the hierarchy. Thus, in the pose estimation problem, surviving states are subdivided into multiple finer-resolution states; in the image segmentation problem,
broader object classes are split into their constituent classes for the next level.
We propose a novel method for learning structured cascades when inference is intractable due to
loops in the graphical structure. The key idea of our approach is to decompose the loopy model into
a collection of equivalent tractable sub-models for which inference is tractable. What distinguishes
our approach from other decomposition based methods (e.g., [5, 7]) is that, because the cascade?s
objective is filtering and not decoding, our approach does not require enforcing the constraint that the
sub-models agree on which output has maximum score. We call our approach structured ensemble
cascades.
3.1 Decomposition without agreement constraints
Given a loopy (intractable) graphical model, it is always possible to express the score of a given output θ(x, y) as the sum of P scores θ_p(x, y) under sub-models that collectively cover every edge in the loopy model: θ(x, y) = Σ_p θ_p(x, y). (See Figures 2 & 3 for illustrations specific to the
experiments presented in this paper.) For example, in the method of dual decomposition [5], it is
possible to solve a relaxed MAP problem in the (intractable) full model by running inference in
the (tractable) sub-models under the constraint that all sub-models agree on the argmax solution.
Enforcing this constraint requires iteratively re-weighting unary potentials of the sub-models and
repeatedly re-running inference until each sub-model convergences to the same argmax solution.
However, for the purposes of a structured cascade, we are only interested in computing the max
marginals ?? (x, yj ). In other words, we are only interested in knowing whether or not a configuration y consistent with yj that scores highly in each sub-model ?p (x, y) exists. We show in the
remainder of this section that the requirement that a single y consistent with yj optimizes the score
of each sub-model (i.e., that all sub-models agree) is not necessary for the purposes of filtering. Thus,
because we do not have to enforce agreement between sub-models, we can learn a structured cascade for intractable models, but pay only a linear (factor of P ) increase in inference time over the
tractable sub-models.
Formally, we define a single level of the ensemble cascade as a set of P models such that θ(x, y) = Σ_p θ_p(x, y). We let θ_p(x, ·), θ_p*(x, ·), θ_p*(x) and t_p(x, α) be the score, max marginal, max score, and threshold of the p'th model, respectively. We define the argmax marginal or witness y_p*(x, y_j) to be the maximizing complete assignment of the corresponding max marginal θ_p*(x, y_j). Then, if y = y_p*(x, y_j) is the same for each of the P sub-models, we have that
θ*(x, y_j) = Σ_p θ_p*(x, y_j).   (4)
Note that if we do not require the sub-models to agree, then θ*(x, y_j) is strictly less than Σ_p θ_p*(x, y_j). Nonetheless, as we show next, the approximation θ*(x, y_j) ≈ Σ_p θ_p*(x, y_j) is still useful and sufficient for filtering in a structured cascade.
3.2 Safe filtering and generalization error
We first show that if a given label y has a high score in the full model, it must also have a large
ensemble max marginal score, even if the sub-models do not agree on the argmax. This results in a
"safe filtering" lemma similar to that given in [4], as follows:
Lemma 1 (Joint Safe Filtering). If Σ_p θ_p(x, y) > t, then Σ_p θ_p*(x, y_j) > t for all y_j ∈ y.
Proof. In English, this lemma states that if the global score is above a given threshold, then the sum of sub-model max-marginals is also above threshold (with no agreement constraint). The proof is straightforward. For any y_j consistent with y, we have θ_p*(x, y_j) ≥ θ_p(x, y). Therefore Σ_p θ_p*(x, y_j) ≥ Σ_p θ_p(x, y) > t.
Therefore, we see that an agreement constraint is not necessary in order to filter safely: if we ensure that the combined score Σ_p θ_p(x, y) of the true label y is above threshold, then we can filter without making a mistake if we compute max marginals by running inference separately for each sub-model.
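A minimal sketch of this uncoupled filtering decision, combining Lemma 1 with the summed per-model thresholds; as above, the data layout (a list over the P sub-models of per-variable max-marginal arrays) is our own convention.

    import numpy as np

    def ensemble_filter(per_model_max_marginals, per_model_thresholds):
        # per_model_max_marginals[p][j]: array of max marginals theta_p*(x, y_j)
        # per_model_thresholds[p]: scalar t_p(x, alpha) for sub-model p
        t = sum(per_model_thresholds)                        # sum_p t_p(x, alpha)
        n_vars = len(per_model_max_marginals[0])
        kept = []
        for j in range(n_vars):
            summed = sum(mm[j] for mm in per_model_max_marginals)
            kept.append(np.flatnonzero(summed > t))          # states that survive
        return kept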
However, there is still potentially a price to pay for disagreement. If the sub-models do not agree,
and the truth is not above threshold, then the threshold may filter all of the states for a given variable
y_j and therefore "break" the cascade. This results from the fact that without agreement, there is no single argmax output y* that is always above threshold for any α; therefore, it is not guaranteed
that there exists an output y to satisfy the Joint Safe Filtering Lemma. However, we note that in
our experiments, we never experienced such breakdown of the cascades due to overly aggressive
filtering.
In order to learn parameters that are useful for filtering, Lemma 1 suggests a natural ensemble
filtering loss, which we define for any fixed α as follows:
L_joint(w, ⟨x, y⟩) = 1[ Σ_p θ_p(x, y) ≤ Σ_p t_p(x, α) ],   (5)
where w = {w_1, . . . , w_P} is the set of all parameters of the ensemble. (Note that this loss function is
somewhat conservative because it measures whether or not a sufficient but not necessary condition
for a filtering error has occurred.)
To conclude this section, we provide a generalization bound on the ensemble filtering loss, equivalent to the bounds in [4] for the single-model cascades. To do so, we first eliminate the dependence
on x and w by rewriting L_joint in terms of the scores of every possible state assignment, w^⊤ f(x, y_j), according to each sub-model. Let the vector θ_x ∈ R^{mP} denote these scores, where m is the number
of possible state assignments in the sub-models.
Theorem 1. For any fixed α ∈ [0, 1), define the dominating cost function φ(y, θ_x) = r_γ((1/P) Σ_p [θ_p(x, y) − t_p(x, α)]), where r_γ(·) is the ramp function with slope γ. Let ‖w_p‖₂ ≤ F for all p, and ‖f(x, y_j)‖₂ ≤ 1 for all x and y_j. Then there exists a constant C such that for any integer n and any 0 < δ < 1, with probability 1 − δ over samples of size n, every w = {w_1, . . . , w_P} satisfies:
E[L_joint(Y, θ_x)] ≤ Ê[φ(Y, θ_x)] + (C m ℓ F √P) / (γ √n) + √(8 ln(2/δ) / n),   (6)
where Ê is the empirical expectation with respect to training data.
The proof is given in the supplemental materials.
3.3 Parameter estimation with gradient descent
In this section we now discuss how to minimize the loss (5) given a dataset. We rephrase the SC
optimization problem (3) using the ensemble max-marginals to form the ensemble cascade learning
problem,
inf_{w_1, …, w_P, ξ≥0}  (λ/2) Σ_p ‖w_p‖² + (1/n) Σ_i ξ^i   s.t.  Σ_p θ_p(x^i, y^i) ≥ Σ_p t_p(x^i, α) + ℓ^i − ξ^i,  ∀i.   (7)
Seeing that the constraints can be rearranged to show ξ^i ≥ Σ_p t_p(x^i, α) − Σ_p θ_p(x^i, y^i) + ℓ^i, we
can form an equivalent unconstrained minimization problem and take the subgradient of (7) with
respect to each parameter w_p. This yields the following update rule for the p'th model:
w_p ← (1 − λ)w_p + { 0, if Σ_p θ_p(x^i, y^i) ≥ Σ_p t_p(x^i, α) + ℓ^i;  ∇θ_p(x^i, y^i) − ∇t_p(x^i, α), otherwise },   (8)
where gradients are taken with respect to w_p.
This update is identical to the original SC update with the exception that we update each model
individually only when the ensemble has made a mistake jointly. Thus, learning to filter with the
ensemble requires only P times as many resources as learning to filter with any of the models
individually.
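To make the training loop concrete, here is a sketch of one stochastic update of equation (8). The sub-model interface (score, threshold, and their gradients with respect to w_p, which are just feature-vector differences in a linear model) and the explicit step size eta are our own choices, not the authors' code.

    def ensemble_subgradient_step(models, x, y, loss_l, alpha, lam, eta):
        # models: list of P tractable sub-models, each holding its parameter vector m.w
        # and exposing score(x, y), threshold(x, alpha), and their gradients w.r.t. m.w
        total_score = sum(m.score(x, y) for m in models)
        total_thresh = sum(m.threshold(x, alpha) for m in models)
        mistake = total_score < total_thresh + loss_l        # joint margin violated?
        for m in models:
            m.w = (1.0 - lam) * m.w                          # shrinkage from the regularizer
            if mistake:                                      # update only on a joint mistake
                m.w = m.w + eta * (m.score_grad(x, y) - m.threshold_grad(x, alpha))
        return mistake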
4 Experiments
We evaluated structured ensemble cascades in two experiments. First, we analyzed the "best-case"
filtering performance of the summed max-marginal approximation to the true marginals on a synthetic image segmentation task, assuming the true scoring function ?(x, y) is available for inference.
Second, we evaluated the real-world accuracy of our approach on a difficult, real-world human pose
dataset (VideoPose). In both experiments, the max-marginal ensemble outperforms state-of-the-art
baselines.
[Figure 2(a) diagram: θ(x, y) = θ_1(x, y) + θ_2(x, y) + θ_3(x, y) + θ_4(x, y) + θ_5(x, y) + θ_6(x, y), the decomposition of a grid into comb trees; (b) plot of filtering error vs. K.]
Figure 2: (a) Example decomposition of a 3 × 3 fully connected grid into all six constituent "comb" trees. In general, an n × n grid yields 2n such trees. (b) Improvement over Loopy BP and constituent tree-models on the synthetic segmentation task. Error bars show standard error.
4.1 Asymptotic Filtering Accuracy
We first evaluated the filtering accuracy of the max-marginal ensemble on a synthetic 8-class segmentation task. For this experiment, we removed variability due to parameter estimation and focused
our analysis on accuracy of inference. We compared our approach to Loopy Belief Propagation
(Loopy BP) [8], a state-of-the-art method for approximate inference, on an 11 × 11 two-dimensional grid MRF.* For the ensemble, we used 22 unique "comb" tree structures to approximate the full grid model (i.e. Figure 2(a)). To generate a synthetic instance, we generated unary potentials θ_i(k) uniformly on [0, 1] and pairwise potentials log-uniformly: θ_ij(k, k′) = exp(v), where v ∼ U[−25, 25]
was sampled independently for every edge and every pair of classes. (Note that for the ensemble,
we normalized unary and edge potentials by dividing by the number of times that each potential was
included in any model.) It is well known that inference for such grid MRFs is extremely difficult
[8], and we observed that Loopy BP failed to converge for at least a few variables on most examples
we generated.
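For concreteness, a sketch of this synthetic instance generation; the dictionary-of-edges layout and the random seed are our own choices.

    import numpy as np

    def make_grid_instance(n=11, k=8, seed=0):
        # random potentials for an n x n grid MRF with k classes, as described above
        rng = np.random.default_rng(seed)
        unary = rng.uniform(0.0, 1.0, size=(n, n, k))
        pairwise = {}
        for i in range(n):
            for j in range(n):
                if i + 1 < n:   # vertical edge
                    pairwise[((i, j), (i + 1, j))] = np.exp(rng.uniform(-25, 25, (k, k)))
                if j + 1 < n:   # horizontal edge
                    pairwise[((i, j), (i, j + 1))] = np.exp(rng.uniform(-25, 25, (k, k)))
        return unary, pairwise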
Ensemble outperforms Loopy BP. We evaluated our approach on 100 synthetic grid MRF instances. For each instance, we computed the accuracy of filtering using marginals from Loopy BP,
the ensemble, and each individual sub-model. We determined error rates by counting the number of
times "ground truth" was incorrectly filtered if the top K states were kept for each variable, where we sampled 1000 "ground truth" examples from the true joint distribution using Gibbs sampling.
To obtain a good estimate of the true marginals, we restarted the chain for each sample and allowed
1000 iterations of mixing time. The result is presented in Figure 2(b) for all possible values of
K (filter aggressiveness). We found that the ensemble outperformed Loopy BP and the individual
sub-models by a significant margin for all K.
Effect of sub-model agreement. We next investigated the question of whether or not the ensembles were most accurate on variables for which the sub-models tended to agree. For each variable
y_ij in each instance, we computed the mean pairwise Spearman correlation between the ranking of
the 8 classes induced by the max marginals of each of the 22 sub-models. We found that complete
agreement between all sub-models never occurred (the median correlation was 0.38). We found that
sub-model agreement was significantly correlated (p < 10⁻¹⁵) with the error of the ensemble for all values of K, peaking at ρ = −0.143 at K = 5. Thus, increased agreement predicted a decrease in
error of the ensemble. We then asked the question: Does the effect of model agreement explain the
improvement of the ensemble over Loopy BP? In fact, the improvement in error compared to Loopy
BP was not correlated with sub-model agreement for any K (maximum ρ = 0.0185, p < 0.05).
Thus, sub-model agreement does not explain the improvement over Loopy BP, indicating that submodel disagreement is not related to the difficulty in inference problems that causes Loopy BP to
underperform relative to the ensembles (e.g., due to convergence failure.)
* We used the UGM Matlab Toolbox by Mark Schmidt for the Loopy BP and Gibbs MCMC sections of this experiment. Publicly available at: http://people.cs.ubc.ca/~schmidtm/Software/UGM.html
[Figure 3 panels (a) Decoding Error and (b) Top K = 4 Error: bar plots.]
(c) Ensemble efficiency:
Level   State Dimensions   PCP_0.25 in top K=4   Efficiency (%)
0       10 × 10 × 24       —                     —
2       20 × 20 × 24       98.8                  87.5
4       40 × 40 × 24       93.8                  96.9
6       80 × 80 × 24       84.6                  99.2
Figure 3: (a),(b): Prediction error for the VideoPose dataset. Reported errors are the average distance from a predicted joint location to the true joint for frames that lie in the [25,75] inter-quartile range (IQR) of errors. Error bars show standard errors computed with respect to clips. All SC models outperform [9]; the "torso only" persistence cascade introduces additional error compared to a single-frame cascade, but adding arm dependencies in the ensemble yields the best performance. (c): Summary of test set filtering efficiency and accuracy for the ensemble cascade. PCP_0.25 measures the Oracle % of correctly matched limb locations given unfiltered states; see [6] for more details.
4.2 The VideoPose Dataset
Our dataset consists of 34 video clips of approximately 50 frames each. The clips were harvested
from three popular TV shows: 3 from Buffy the Vampire Slayer, 27 from Friends, and 4 from
LOST. Clips were chosen to highlight a variety of situations and and movements when the camera is
largely focused on a single actor. In our experiments, we use the Buffy and half of the Friends clips
as training (17 clips), and the remaining Friends and LOST clips for testing. In total we test on 901
individual frames. The Friends are split so no clips from the same episode are used for both training
and testing. We further set aside 4 of the Friends test clips to use as a development set. Each frame
of each clip is hand-annotated with locations of joints of a full pose model: torso, upper/lower arms
for both right and left, and top and bottom of head. For each joint, a binary tag indicating whether
or not the joint is occluded is also included, to be used in future research.† For simplicity, we use
only the torso and upper arm annotations in this work, as these have the strongest continuity across
frames and strong geometric relationships.
Articulated pose model. All of the models we evaluated on this dataset share the same basic
structure: a variable for each limb's (x, y) location and angle rotation (torso, left arm, and right arm)
with edges between torso and arms to model pose geometry. We refer to this basic model, evaluated
independently on each frame, as the "Single Frame" approach. For the VideoPose dataset, we augmented this model by adding edges between limb states in adjacent frames (Figure 1), forming an
intractable, loopy model. Features: Our features in a single frame are the same as in the beginning
levels of the pictorial structure cascade from [6]: unary features are discretized Histogram of Gradients part-detector scores, and pairwise terms measure relative displacement in location and angle
between neighboring parts. Pairwise features connecting limbs across time also express geometric
displacement, allowing our model to capture the fact that human limbs move smoothly over time.
Coarse-to-Fine Ensemble Cascade. We learned a coarse-to-fine structured cascade with six levels for tracking as follows. The six levels use increasingly finer state spaces for joint locations,
discretized into bins of resolution 10 ? 10 up to 80 ? 80, with each stage doubling one of the state
space dimensions in the refinement step. All levels use an angular discretization of 24 bins. For
the ensemble cascade, we learned three sub-models simultaneously (Figure 1), with each sub-model
accounting for temporal consistency for a different limb by adding edges connecting the same limb
in consecutive frames.
Experimental Comparison. A summary of results is presented in Figure 3. We compared the
single-frame cascade and the ensemble cascade to a state-of-the-art single-frame pose detector (Ferrari et al. [9]) and to one of the individual sub-models, modeling torso consistency only ("Torso Only").

‡ The VideoPose dataset is available online at http://vision.grasp.upenn.edu/video/.

Figure 4: Qualitative test results. Points shown are the position of left/right shoulders and torsos at the last
level of the ensemble SC (blue square, green dot, white circle resp.). Also shown (green line segments) are the
best-fitting hypotheses to ground-truth joints, selected from within the top 4 max-marginal values. Shown as
dotted gray lines is the best-guess pose returned by [9].

We evaluated the method from [9] on only the first half of the test data due to computation
time (taking approximately 7 minutes/frame). We found that the ensemble cascade was the most
accurate for every joint in the model, that all cascades outperformed the state-of-the-art baseline,
and, interestingly, that the single-frame cascade outperformed the torso-only cascade. We suspect
that the poor performance of the torso-only model may arise because propagating only torso states
through time leads to an over-reliance on the relatively weak torso signal to determine the location
of all the limbs. Sample qualitative output from the ensemble is presented in Figure 4.
5 Discussion
Related Work. Tracking with articulated body parts is challenging for two main reasons. First, body
parts are hard to detect in unconstrained environments due to the enormous variability in appearance
(from lighting, clothing and articulation) and occlusion. Second, the huge number of degrees of
freedom makes exact modeling of the problem computationally prohibitive. In light of these two
issues, many works focus on fixed-camera environments (e.g., [10, 11, 12]), some even assuming
sillhouettes can be obtained (e.g., [2]), or 3d information from multiple sensors ([13]). In choices of
modeling, past works reduce the large state space degrees of freedom by only modeling location and
scale, or resorting to sampling methods ([1, 14]), or embedding into low-dimensional latent spaces
([10]). In contrast, in this work we learn to efficiently navigate an unconstrained state space in the
challenging setting of a single, non-fixed camera.
We adopt the same basic modeling structure as [15, 9, 16] in our work, but also model dependencies
through time. We also take a discriminative approach to training rather than generative. Ferrari et
al. [9] use loopy belief propagation to incorporate temporal consistency of parts, but to our knowledge we are the first to quantitatively evaluate on movie/TV show sequences.
In the method of dual decomposition [5], efficient optimization of a LP relaxation of MAP inference
in an intractable model is achieved by coupling the inference of a collection of tractable sub-models.
This coupling is achieved by repeatedly performing inference and updating a set of dual parameters
until convegence. In contrast, we perform inference independently in each sub-model only once,
and reason about individual variables using the sums of max-marginals.
Future Research. Several key questions remain as future directions of research. Although we
presented generalization bounds for the error of the cascade, such bounds are purely "post hoc."
We are currently investigating a priori properties of or assumptions about the data and cascade that
will provably lead to efficient cascaded learning and inference. In the future, our approach on the
VideoPose dataset could be easily extended to model more limbs, additionally complex features in
time and geometry (e.g. [6]), and additional states such as occlusions. Successfully solving this
problem is necessary in order to understand the context and consequences of interactions between
actors in video; e.g., to be able to follow a pointing arm or to observe the transfer of an important
object from one person to another.
Acknowledgements
The authors were partially supported by NSF Grant 0803256 and ARL Cooperative Agreement W911NF-10-2-0016. David Weiss was also supported by an NSF Graduate Research Fellowship.
References
[1] L. Sigal, S. Bhatia, S. Roth, M. J. Black, and M. Isard. Tracking loose-limbed people. In Proc. CVPR, 2004.
[2] B. Wu and R. Nevatia. Detection and tracking of multiple, partially occluded humans by Bayesian combination of edgelet based part detectors. IJCV, 75(2):247–266, 2007.
[3] J. D. J. Shotton, J. Winn, C. Rother, and A. Criminisi. TextonBoost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context. IJCV, 81(1), January 2009.
[4] D. Weiss and B. Taskar. Structured prediction cascades. In Proc. AISTATS, 2010.
[5] N. Komodakis, N. Paragios, and G. Tziritas. MRF optimization via dual decomposition: Message-passing revisited. In Proc. ICCV, 2007.
[6] B. Sapp, A. Toshev, and B. Taskar. Cascaded models for articulated pose estimation. In Proc. ECCV, 2010.
[7] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, second edition, 1999.
[8] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
[9] V. Ferrari, M. Marin-Jimenez, and A. Zisserman. Progressive search space reduction for human pose estimation. In Proc. CVPR, 2008.
[10] M. Andriluka, S. Roth, and B. Schiele. People-tracking-by-detection and people-detection-by-tracking. In Proc. CVPR, 2008.
[11] S. Pellegrini, A. Ess, K. Schindler, and L. Van Gool. You'll never walk alone: Modeling social behavior for multi-target tracking. In Proc. ICCV, 2009.
[12] L. Kratz and K. Nishino. Tracking with local spatio-temporal motion patterns in extremely crowded scenes. In Proc. CVPR, 2010.
[13] R. Muñoz-Salinas, E. Aguirre, and M. García-Silvente. People detection and tracking using stereo vision and color. Image and Vision Computing, 25(6):995–1007, 2007.
[14] J. S. Kwon and K. M. Lee. Tracking of a non-rigid object via patch-based dynamic appearance modeling and adaptive basin hopping Monte Carlo sampling. In Proc. CVPR, 2009.
[15] B. Sapp, C. Jordan, and B. Taskar. Adaptive pose priors for pictorial structures. In Proc. CVPR, 2010.
[16] M. Andriluka, S. Roth, and B. Schiele. Pictorial structures revisited: People detection and articulated pose estimation. In Proc. CVPR, 2009.
Indraneel Mukherjee
Robert E. Schapire
Princeton University, Department of Computer Science, Princeton, NJ 08540
{imukherj,schapire}@cs.princeton.edu
Abstract
Boosting combines weak classifiers to form highly accurate predictors. Although
the case of binary classification is well understood, in the multiclass setting, the
?correct? requirements on the weak classifier, or the notion of the most efficient
boosting algorithms are missing. In this paper, we create a broad and general
framework, within which we make precise and identify the optimal requirements
on the weak-classifier, as well as design the most effective, in a certain sense,
boosting algorithms that assume such requirements.
1 Introduction
Boosting [17] refers to a general technique of combining rules of thumb, or weak classifiers, to form
highly accurate combined classifiers. Minimal demands are placed on the weak classifiers, so that a
variety of learning algorithms, also called weak-learners, can be employed to discover these simple
rules, making the algorithm widely applicable. The theory of boosting is well-developed for the case
of binary classification. In particular, the exact requirements on the weak classifiers in this setting
are known: any algorithm that predicts better than random on any distribution over the training set
is said to satisfy the weak learning assumption. Further, boosting algorithms that minimize loss as
efficiently as possible have been designed. Specifically, it is known that the Boost-by-majority [6]
algorithm is optimal in a certain sense, and that AdaBoost [11] is a practical approximation.
Such an understanding would be desirable in the multiclass setting as well, since many natural classification problems involve more than two labels, e.g. recognizing a digit from its image, natural
language processing tasks such as part-of-speech tagging, and object recognition in vision. However, for such multiclass problems, a complete theoretical understanding of boosting is lacking. In
particular, we do not know the "correct" way to define the requirements on the weak classifiers, nor
has the notion of optimal boosting been explored in the multiclass setting.
Straightforward extensions of the binary weak-learning condition to multiclass do not work. Requiring less error than random guessing on every distribution, as in the binary case, turns out to be too
weak for boosting to be possible when there are more than two labels. On the other hand, requiring
more than 50% accuracy even when the number of labels is much larger than two is too stringent,
and simple weak classifiers like decision stumps fail to meet this criterion, even though they often
can be combined to produce highly accurate classifiers [9]. The most common approaches so far
have relied on reductions to binary classification [2], but it is hardly clear that the weak-learning
conditions implicitly assumed by such reductions are the most appropriate.
The purpose of a weak-learning condition is to clarify the goal of the weak-learner, thus aiding in
its design, while providing a specific minimal guarantee on performance that can be exploited by a
boosting algorithm. These considerations may significantly impact learning and generalization because knowing the correct weak-learning conditions might allow the use of simpler weak classifiers,
which in turn can help prevent overfitting. Furthermore, boosting algorithms that more efficiently
and effectively minimize training error may prevent underfitting, which can also be important.
In this paper, we create a broad and general framework for studying multiclass boosting that formalizes the interaction between the boosting algorithm and the weak-learner. Unlike much, but not all,
of the previous work on multiclass boosting, we focus specifically on the most natural, and perhaps
1
weakest, case in which the weak classifiers are genuine classifiers in the sense of predicting a single
multiclass label for each instance. Our new framework allows us to express a range of weak-learning
conditions, both new ones and most of the ones that had previously been assumed (often only implicitly). Within this formalism, we can also now finally make precise what is meant by correct
weak-learning conditions that are neither too weak nor too strong.
We focus particularly on a family of novel weak-learning conditions that have an especially appealing form: like the binary conditions, they require performance that is only slightly better than
random guessing, though with respect to performance measures that are more general than ordinary
classification error. We introduce a whole family of such conditions since there are many ways of
randomly guessing on more than two labels, a key difference between the binary and multiclass settings. Although these conditions impose seemingly mild demands on the weak-learner, we show that
each one of them is powerful enough to guarantee boostability, meaning that some combination of
the weak classifiers has high accuracy. And while no individual member of the family is necessary
for boostability, we also show that the entire family taken together is necessary in the sense that for
every boostable learning problem, there exists one member of the family that is satisfied. Thus, we
have identified a family of conditions which, as a whole, is necessary and sufficient for multiclass
boosting. Moreover, we can combine the entire family into a single weak-learning condition that is
necessary and sufficient by taking a kind of union, or logical OR, of all the members. This combined
condition can also be expressed in our framework.
With this understanding, we are able to characterize previously studied weak-learning conditions. In
particular, the condition implicitly used by AdaBoost.MH [19], which is based on a one-against-all
reduction to binary, turns out to be strictly stronger than necessary for boostability. This also applies
to AdaBoost.M1 [9], the most direct generalization of AdaBoost to multiclass, whose conditions
can be shown to be equivalent to those of AdaBoost.MH in our setting. On the other hand, the
condition implicit to Zhu et al.'s SAMME algorithm [21] is too weak in the sense that even when the
condition is satisfied, no boosting algorithm can guarantee to drive down the training error. Finally,
the condition implicit to AdaBoost.MR [19, 9] (also called AdaBoost.M2) turns out to be exactly
necessary and sufficient for boostability.
Employing proper weak-learning conditions is important, but we also need boosting algorithms that
can exploit these conditions to effectively drive down error. For a given weak-learning condition,
the boosting algorithm that drives down training error most efficiently in our framework can be
understood as the optimal strategy for playing a certain two-player game. These games are nontrivial to analyze. However, using the powerful machinery of drifting games [8, 16], we are able to
compute the optimal strategy for the games arising out of each weak-learning condition in the family
described above. These optimal strategies have a natural interpretation in terms of random walks, a
phenomenon that has been observed in other settings [1, 6].
Our focus in this paper is only on minimizing training error, which, for the algorithms we derive,
provably decreases exponentially fast with the number of rounds of boosting. Such results can be
used in turn to derive bounds on the generalization error using standard techniques that have been
applied to other boosting algorithms [18, 11, 13]. (We omit these due to lack of space.)
The game-theoretic strategies are non-adaptive in that they presume prior knowledge about the edge,
that is, how much better than random are the weak classifiers. Algorithms that are adaptive, such as
AdaBoost, are much more practical because they do not require such prior information. We show
therefore how to derive an adaptive boosting algorithm by modifying one of the game-theoretic
strategies.
We present experiments aimed at testing the efficacy of the new methods when working with a very
weak weak-learner to check that the conditions we have identified are indeed weaker than others that
had previously been used. We find that our new adaptive strategy achieves low test error compared
to other multiclass boosting algorithms which usually heavily underfit. This validates the potential
practical benefit of a better theoretical understanding of multiclass boosting.
Previous work. The first boosting algorithms were given by Schapire [15] and Freund [6], followed
by their AdaBoost algorithm [11]. Multiclass boosting techniques include AdaBoost.M1 and AdaBoost.M2 [11], as well as AdaBoost.MH and AdaBoost.MR [19]. Other approaches include [5, 21].
There are also more general approaches that can be applied to boosting including [2, 3, 4, 12]. Two
game-theoretic perspectives have been applied to boosting. The first one [10, 14] views the weak-
2
learning condition as a minimax game, while drifting games [16, 6] were designed to analyze the
most efficient boosting algorithms. These games have been further analyzed in the multiclass and
continuous time setting in [8].
2 Framework
We introduce some notation. Unless otherwise stated, matrices will be denoted by bold capital letters
like M, and vectors by bold small letters like v. Entries of a matrix and vector will be denoted as
M(i, j) or v(i), while M(i) will denote the i-th row of a matrix. The inner product of two vectors u, v
is denoted by ⟨u, v⟩. The Frobenius inner product of two matrices, Tr(M M′ᵀ), will be denoted by
M • M′. The indicator function is denoted by 1[·]. The set of distributions over {1, . . . , k} will be
denoted by Δ{1, . . . , k}.
In multiclass classification, we want to predict the labels of examples lying in some set X. Each
example x ∈ X has a unique label y in the set {1, . . . , k}, where k ≥ 2. We are provided a training
set of labeled examples {(x1 , y1 ), . . . , (xm , ym )}.
Boosting combines several mildly powerful predictors, called weak classifiers, to form a highly
accurate combined classifier, and has been previously applied for multiclass classification. In this
paper, we only allow weak classifiers that predict a single class for each example. This is appealing,
since the combined classifier has the same form, although it differs from what has been used in much
previous work.
We adopt a game-theoretic view of boosting. A game is played between two players, Booster and
Weak-Learner, for a fixed number of rounds T . With binary labels, Booster outputs a distribution
in each round, and Weak-Learner returns a weak classifier achieving more than 50% accuracy on
that distribution. The multiclass game is an extension of the binary game. In particular, in each
round t: (1) Booster creates a cost-matrix C_t ∈ R^{m×k}, specifying to Weak-Learner that the cost
of classifying example x_i as l is C_t(i, l). The cost-matrix may not be arbitrary, but should conform
to certain restrictions as discussed below. (2) Weak-Learner returns some weak classifier h_t : X →
{1, . . . , k} from a fixed space h_t ∈ H so that the cost incurred, C_t • 1_{h_t} = Σ_{i=1}^m C_t(i, h_t(x_i)),
is "small enough," according to some conditions discussed below. Here by 1_h we mean the m × k
matrix whose (i, j)-th entry is 1[h(x_i) = j]. (3) Booster computes a weight α_t for the current weak
classifier based on how much cost was incurred in this round.
At the end, Booster predicts according to the weighted plurality vote of the classifiers returned in
each round:
$$H(x) = \operatorname*{argmax}_{l \in \{1,\dots,k\}} f_T(x, l), \quad \text{where} \quad f_T(x, l) = \sum_{t=1}^{T} \alpha_t\, 1[h_t(x) = l]. \tag{1}$$
By carefully choosing the cost matrices in each round, Booster aims to minimize the training error
of the final classifier H, even when Weak-Learner is adversarial. The restrictions on cost-matrices
created by Booster, and the maximum cost Weak-Learner can suffer in each round, together define
the weak-learning condition being used. For binary labels, the traditional weak-learning condition
states: for any non-negative weights w(1), . . . , w(m) on the training set, the error of the weak
classifier returned is at most (1/2 − γ/2) Σ_i w(i). Here γ parametrizes the condition. There are many
ways to translate this condition into our language. The one with fewest restrictions on the cost-matrices
requires that labeling correctly should be less costly than labeling incorrectly, ∀i : C(i, y_i) ≤ C(i, ȳ_i),
while the restriction on the returned weak classifier h requires less cost than predicting randomly:
$$\sum_i C(i, h(x_i)) \;\le\; \sum_i \left[\left(\tfrac{1}{2} - \tfrac{\gamma}{2}\right) C(i, \bar{y}_i) + \left(\tfrac{1}{2} + \tfrac{\gamma}{2}\right) C(i, y_i)\right].$$
By the correspondence w(i) = C(i, ȳ_i) − C(i, y_i), we may verify the two conditions are the same.
We will rewrite this condition after making some simplifying assumptions. Henceforth, without
loss of generality, we assume that the true label is always 1. Let C^bin ⊂ R^{m×2} consist of matrices
C which satisfy C(i, 1) ≤ C(i, 2). Further, let U^bin_γ ∈ R^{m×2} be the matrix whose every row is
(1/2 + γ/2, 1/2 − γ/2). Then, a Weak-Learner searching the space H satisfies the binary weak-learning
condition if: ∀C ∈ C^bin, ∃h ∈ H : C • (1_h − U^bin_γ) ≤ 0. There are two main benefits to this
reformulation. With linear homogeneous constraints, the mathematics is simplified, as will be apparent
later. More importantly, by varying the restrictions C^bin on the cost vectors and the matrix U^bin_γ, we
can generate a vast variety of weak-learning conditions for the multiclass setting k ≥ 2, as we now
show.
3
Let C ⊆ R^{m×k} be a collection of cost-matrices and let B ∈ R^{m×k} be a matrix, which we call the baseline; we say a weak classifier space
H satisfies the condition (C, B) if
$$\forall C \in \mathcal{C},\ \exists h \in \mathcal{H} : C \bullet (1_h - B) \le 0, \quad \text{i.e.,} \quad \sum_{i=1}^{m} C(i, h(x_i)) \le \sum_{i=1}^{m} \langle C(i), B(i) \rangle. \tag{2}$$
In (2), the variable matrix C specifies how costly each misclassification is, while the baseline B
specifies a weight for each misclassification. The condition therefore states that a weak classifier should not exceed the average cost when weighted according to baseline B. This large class
of weak-learning conditions captures many previously used conditions, such as the ones used by
AdaBoost.M1 [9], AdaBoost.MH [19] and AdaBoost.MR [9, 19] (see below), as well as novel conditions introduced in the next section.
By studying this vast class of weak-learning conditions, we hope to find the one that will serve the
main purpose of the boosting game: finding a convex combination of weak classifiers that has zero
training error. For this to be possible, at the minimum the weak classifiers should be sufficiently rich
for such a perfect combination to exist. Formally, a collection H of weak classifiers is eligible for
boosting, or simply boostable, if there exists a distribution λ on this space that linearly separates the
data: ∀i : argmax_{l ∈ {1,...,k}} Σ_{h∈H} λ(h) 1[h(x_i) = l] = y_i. The weak-learning condition plays two
roles. It rejects spaces that are not boostable, and provides an algorithmic means of searching for the
right combination. Ideally, the second factor will not cause the weak-learning condition to impose
additional restrictions on the weak classifiers; in that case, the weak-learning condition is merely a
reformulation of being boostable that is more appropriate for deriving an algorithm. In general, it
could be too strong, i.e. certain boostable spaces will fail to satisfy the conditions. Or it could be too
weak i.e., non-boostable spaces might satisfy such a condition. Booster strategies relying on either
of these conditions will fail to drive down error; the former due to underfitting, and the latter due
to overfitting. In the next section we will describe conditions captured by our framework that avoid
being too weak or too strong.
3 Necessary and sufficient weak-learning conditions
The binary weak-learning condition has an appealing form: for any distribution over the examples,
the weak classifier needs to achieve error not greater than that of a random player who guesses
the correct answer with probability 1/2 + γ. Further, this is the weakest condition under which
boosting is possible, as follows from a game-theoretic perspective [10, 14]. Multiclass weak-learning
conditions with similar properties are missing in the literature. In this section we show how our
framework captures such conditions.
In the multiclass setting, we model a random player as a baseline predictor B ∈ R^{m×k} whose rows
are distributions over the labels, B(i) ∈ Δ{1, . . . , k}. The prediction on example i is a sample from
B(i). We only consider the space of edge-over-random baselines B^eor_γ ⊆ R^{m×k} who have a faint
clue about the correct answer. More precisely, any baseline B ∈ B^eor_γ in this space is γ more likely
to predict the correct label than an incorrect one on every example i: ∀l ≠ 1, B(i, 1) ≥ B(i, l) + γ,
with equality holding for some l.
When k = 2, the space B^eor_γ consists of the unique player U^bin_γ, and the binary weak-learning
condition is given by (C^bin, U^bin_γ). The new conditions generalize this to k > 2. In particular, define
C^eor to be the multiclass extension of C^bin: any cost-matrix in C^eor should put the least cost on the
correct label, i.e., the rows of the cost-matrices should come from the set {c ∈ R^k : ∀l, c(1) ≤ c(l)}.
Then, for every baseline B ∈ B^eor_γ, we introduce the condition (C^eor, B), which we call an
edge-over-random weak-learning condition. Since C • B is the expected cost of the edge-over-random
baseline B on matrix C, the constraints (2) imposed by the new condition essentially require better
than random performance.
We now present the central results of this section. The seemingly mild edge-over-random conditions
guarantee eligibility, meaning weak classifiers that satisfy any one such condition can be combined
to form a highly accurate combined classifier.
Theorem 1 (Sufficiency). If a weak classifier space H satisfies a weak-learning condition (C^eor, B)
for some B ∈ B^eor_γ, then H is boostable.
4
The proof involves the von Neumann minimax theorem, and is in the spirit of the ones in [10]. On
the other hand, the family of such conditions, taken as a whole, is necessary for boostability in the
sense that every eligible space of weak classifiers satisfies some edge-over-random condition.
Theorem 2 (Relaxed necessity). For every boostable weak classifier space H, there exists a γ > 0
and B ∈ B^eor_γ such that H satisfies the weak-learning condition (C^eor, B).
The proof shows existence through non-constructive averaging arguments. Theorem 2 states that
any boostable weak classifier space will satisfy some condition in our family, but it does not help
us choose the right condition. Experiments in Section 5 suggest (C^eor, U_γ) is effective with very
simple weak-learners compared to popular boosting algorithms. (Here U_γ ∈ B^eor_γ is the
edge-over-random baseline closest to uniform; it has weight (1 − γ)/k on incorrect labels and (1 − γ)/k + γ
on the correct label.) However, there are theoretical examples showing each condition in our family
is too strong (supplement).
A perhaps extreme way of weakening the condition is by requiring the performance on a cost matrix
to be competitive not with a fixed baseline B ∈ B^eor_γ, but with the worst of them:
$$\forall C \in \mathcal{C}^{eor},\ \exists h \in \mathcal{H} : C \bullet 1_h \le \max_{B \in \mathcal{B}^{eor}_\gamma} C \bullet B. \tag{3}$$
Condition (3) states that during the course of the same boosting game, Weak-Learner may choose
to beat any edge-over-random baseline B ∈ B^eor_γ, possibly a different one for every round and every
cost-matrix. This may superficially seem much too weak. On the contrary, this condition turns out
to be equivalent to boostability. In other words, according to our criterion, it is neither too weak nor
too strong as a weak-learning condition. However, unlike the edge-over-random conditions, it also
turns out to be more difficult to work with algorithmically.
Furthermore, this condition can be shown to be equivalent to the one used by AdaBoost.MR [19, 9].
This is perhaps remarkable since the latter is based on the apparently completely unrelated all-pairs
multiclass to binary reduction: the MR condition is given by (C^MR, B^MR_γ), where C^MR consists of
cost-matrices that put non-negative costs on incorrect labels and whose rows sum up to zero, while
B^MR_γ ∈ R^{m×k} is the matrix that has γ in the first column and −γ in all other columns (supplement).
Further, the MR condition, and hence (3), can be shown to be neither too weak nor too strong.
Theorem 3 (MR). A weak classifier space H satisfies AdaBoost.MR's weak-learning condition
(C^MR, B^MR_γ) if and only if it satisfies (3). Moreover, this condition is equivalent to being boostable.
Next, we illustrate the strengths of our edge-over-random weak-learning conditions through concrete
comparisons with previous algorithms.
Comparison with SAMME. The SAMME algorithm of [21] requires the weak classifiers to
achieve less error than uniform random guessing for multiple labels; in our language, their
weak-learning condition is (C = {(−t, t, t, . . .) : t ≥ 0}, U_γ). As is well-known, this condition is
not sufficient for boosting to be possible. In particular, consider the dataset {(a, 1), (b, 2)} with
k = 3, m = 2, and a weak classifier space consisting of h_1, h_2 which always predict 1, 2, respectively.
Since neither classifier distinguishes between a, b, we cannot achieve perfect accuracy by
combining them in any way. Yet, due to the constraints on the cost-matrix, one of h_1, h_2 will always
manage non-positive cost while random always suffers positive cost. On the other hand, our
weak-learning condition allows the Booster to choose far richer cost matrices. In particular, when the
cost matrix is C = (c(1) = (−1, +1, 0), c(2) = (+1, −1, 0)) ∈ C^eor, both classifiers in the above
example suffer more loss than the random player U_γ, and fail to satisfy our condition.
Comparison with AdaBoost.MH. AdaBoost.MH is a popular multiclass boosting algorithm that is
based on the one-against-all reduction [19]. However, we show that its implicit demands on the weak
classifier space are too strong. We construct a classifier space that satisfies the condition (C^eor, U_γ)
in our family, but cannot satisfy AdaBoost.MH's weak-learning condition.
Consider a space H that has, for every (1/k + γ)m element subset of the examples, a classifier
that predicts correctly on exactly those elements. The expected loss of a randomly chosen classifier
from this space is the same as that of the random player U_γ. Hence H satisfies this weak-learning
condition. On the other hand, it can be shown (supplement) that AdaBoost.MH's weak-learning
condition is the pair (C^MH, B^MH_γ), where C^MH has non-positive (non-negative) entries on correct
(incorrect) labels, and where each row of the matrix B^MH_γ is the vector (1/2 + γ/2, 1/2 − γ/2, . . . , 1/2 − γ/2). A
quick calculation shows that for any h ∈ H, and C ∈ C^MH with −1 in the first column and zeroes
elsewhere, C • (1_h − B^MH_γ) = 1/2 − 1/k. This is positive when k > 2, so that H fails to satisfy
AdaBoost.MH's condition.
4 Algorithms
In this section we devise algorithms by analyzing the boosting games that employ our edge-over-random
weak-learning conditions. We compute the optimum Booster strategy against a completely
adversarial Weak-Learner, which here is permitted to choose weak classifiers without restriction,
i.e. the entire space H^all of all possible functions mapping examples to labels. By modeling
Weak-Learner adversarially, we make absolutely no assumptions on the algorithm it might use. Hence,
error guarantees enjoyed in this situation will be universally applicable. Our algorithms are derived
from the very general drifting games framework [16] for solving boosting games, in turn inspired
by Freund?s Boost-by-majority algorithm [6], which we review next.
The OS Algorithm. Fix the number of rounds T and an edge-over-random weak-learning condition
(C, B). For simplicity of presentation we fix the weights α_t = 1 in each round. With f_T defined as
in (1), the optimum Booster payoff can be written as
$$\min_{C_1 \in \mathcal{C}}\ \max_{\substack{h_1 \in \mathcal{H}^{all}:\\ C_1 \bullet (1_{h_1} - B) \le 0}}\ \cdots\ \min_{C_T \in \mathcal{C}}\ \max_{\substack{h_T \in \mathcal{H}^{all}:\\ C_T \bullet (1_{h_T} - B) \le 0}}\ \frac{1}{m} \sum_{i=1}^{m} L\big(f_T(x_i, 1), f_T(x_i, 2), \dots, f_T(x_i, k)\big).$$
Here the function L : R^k → R is the error, but we can also consider other loss functions, such as
exponential loss, hinge loss, etc., that upper-bound error and are proper: i.e., L(x) is increasing in
the weight of the correct label x(1), and decreasing in the weights of the incorrect labels x(l), l ≠ 1.
Directly analyzing the optimal payoff is hard. However, Schapire [16] observed that the payoffs
can be very well approximated by certain potential functions. Indeed, for any b ∈ R^k define the
potential function φ^b_t : R^k → R by the following recurrence:
$$\phi^b_0 = L; \qquad \phi^b_t(s) = \min_{\substack{c \in \mathbb{R}^k:\\ \forall l:\ c(1) \le c(l)}}\ \max_{p \in \Delta\{1,\dots,k\}} \Big\{ \mathbb{E}_{l \sim p}\big[\phi^b_{t-1}(s + e_l)\big] \ :\ \mathbb{E}_{l \sim p}[c(l)] \le \langle b, c \rangle \Big\}, \tag{4}$$
where e_l ∈ R^k is the unit-vector whose l-th coordinate is 1 and whose remaining coordinates are zero.
These potential functions compute an estimate φ^b_t(s_t) of whether an example x will be misclassified,
based on its current state s_t, consisting of counts of votes received so far on the various classes,
s_t(l) = Σ_{t′=1}^{t−1} 1[h_{t′}(x) = l], and the number of rounds t remaining. Using these functions, Schapire [16]
proposed a Booster strategy, aka the OS strategy, which, in round t, constructs a cost matrix C ∈ C
whose each row C(i) achieves the minimum of the right hand side of (4) with b replaced by B(i), t
replaced by T − t, and s replaced by the current state s_t(i). The following theorem provides a guarantee
for the loss suffered by the OS algorithm, and also shows that it is the game-theoretically optimum
strategy when the number of examples is large.
Theorem 4 (Extension of results in [16]). Suppose the weak-learning condition is given by (C, B). If
Booster employs the OS algorithm, then the average potential of the states, (1/m) Σ_{i=1}^m φ^{B(i)}_t(s(i)),
never increases in any round. In particular, the loss suffered after T rounds of play is at most
(1/m) Σ_{i=1}^m φ^{B(i)}_T(0). Further, for any ε > 0, when the loss function satisfies some mild
conditions, and m ≫ T, k, 1/ε, no Booster strategy can achieve loss more than ε below the above bound in T
rounds.
Computing the potentials. In order to implement the OS strategy using our weak-learning conditions,
we only need to compute the potential φ^b_t for distributions b ∈ Δ{1, . . . , k}. Fortunately,
these potentials have a very simple solution in terms of the homogeneous random walk R^b_t(x): the
random position of a particle after t time steps that starts at location x ∈ R^k and in each step moves
in direction e_l with probability b(l).
Theorem 5. If L is proper, and b ∈ Δ{1, . . . , k} satisfies ∀l : b(1) ≥ b(l), then φ^b_t(s) =
E[L(R^b_t(s))]. Furthermore, the vector achieving the minimum in the right hand side of (4) is
given by c(l) = φ^b_{t−1}(s + e_l).
Theorem 5 implies the OS strategy chooses the following cost matrix in round t: c(i, l) =
φ^{b(i)}_{T−t−1}(s_t(i) + e_l), where s_t(i) is the state of example i in round t. Therefore everything boils
down to computing the potentials, which is made possible by Theorem 5. There is no simple
closed-form solution for the non-convex 0-1 loss L(s) = 1[s_1 ≤ max_{i>1} s_i]. However, using Theorem 4,
we can write the potential φ_t(s) explicitly, and then compute it using dynamic programming
in O(t³k) time. This yields very tight bounds.
To obtain a more efficient procedure, and one that we will soon show can be made adaptive, we next
focus on the exponential loss associated with AdaBoost, which does have a closed-form solution.
Lemma 1. If L(s) = exp(η_2(s_2 − s_1)) + · · · + exp(η_k(s_k − s_1)), where each η_l is positive, then
the solution in Theorem 5 evaluates to
$$\phi^b_t(s) = \sum_{l=2}^{k} (a_l)^t\, e^{\eta_l (s_l - s_1)}, \quad \text{where } a_l = 1 - (b_1 + b_l) + e^{\eta_l} b_l + e^{-\eta_l} b_1.$$
The proof by induction is straightforward. In particular, when the condition is (C^eor, U_γ) and
η = (η, η, . . .), the relevant potential is φ_t(s) = κ(γ, η)^t Σ_{l=2}^k e^{η(s_l − s_1)}, where
$$\kappa(\gamma, \eta) = 1 + \frac{(1-\gamma)}{k}\left(e^{\eta} + e^{-\eta} - 2\right) - \left(1 - e^{-\eta}\right)\gamma.$$
The cost-matrix output by the OS algorithm can be
simplified by rescaling, or adding the same number to each coordinate of a cost vector, without
affecting the constraints it imposes on a weak classifier, to the following form
$$c(i, l) = \begin{cases} (e^{\eta} - 1)\, e^{\eta(s_l - s_1)} & \text{if } l > 1, \\ (e^{-\eta} - 1) \sum_{j=2}^{k} e^{\eta(s_j - s_1)} & \text{if } l = 1. \end{cases} \tag{5}$$
With such a choice, Theorem 4 and the form of the potential guarantee that the average loss
(1/m) Σ_{i=1}^m L(s_t(i)) of the states s_t(i) changes by a factor of at most κ(γ, η) every round. Hence
the final loss is at most (k − 1) κ(γ, η)^T.
Variable edges. So far we have required Weak-Learner to beat random by at least a fixed amount
γ > 0 in each round of the boosting game. In reality, the edge over random is larger initially,
and gets smaller as the OS algorithm creates harder cost matrices. Therefore requiring a fixed
edge is either unduly pessimistic or overly optimistic. If the fixed edge is too small, not enough
progress is made in the initial rounds, and if the edge is too large, Weak-Learner fails to meet the
weak-learning condition in later rounds. We attempt to fix this via two approaches: prescribing a
decaying sequence of edges γ_1, . . . , γ_T, or being completely flexible, aka adaptive, with respect to
the edges returned by the weak-learner. In either case, we only use the edge-over-random condition
(C^eor, U_γ), but with varying values of γ.
Fixed sequence of edges. With a prescribed sequence of edges γ_1, . . . , γ_T, the weak-learning condition
(C^eor, U_{γ_t}) in each round t is different. We allow the weights α_1, . . . , α_T to be arbitrary, but they
must be fixed in advance. All the results for uniform γ and weights α_t = 1 hold in this case as well.
In particular, by the arguments leading to (5), if we want to minimize Σ_{i=1}^m Σ_{l=2}^k e^{f_t(i,l) − f_t(i,1)},
where f_t is as defined in (1), then the following strategy is optimal: in round t output the cost matrix
$$C(i, l) = \begin{cases} (e^{\alpha_t} - 1)\, e^{f_{t-1}(i,l) - f_{t-1}(i,1)} & \text{if } l > 1, \\ (e^{-\alpha_t} - 1) \sum_{j=2}^{k} e^{f_{t-1}(i,j) - f_{t-1}(i,1)} & \text{if } l = 1. \end{cases} \tag{6}$$
This will ensure that the expression Σ_{i=1}^m Σ_{l=2}^k e^{f_t(i,l) − f_t(i,1)} changes by a factor of at most
κ(γ_t, α_t) in each round. Hence the final loss will be at most (k − 1) Π_{t=1}^T κ(γ_t, α_t).
Adaptive. In the adaptive setting, we depart from the game-theoretic framework in that Weak-Learner
is no longer adversarial. Further, we are no longer guaranteed to receive a certain sequence
of edges. Since the choice of cost-matrix in (6) does not depend on the edges, we could fix an
arbitrary set of weights α_t in advance, follow the same algorithm as before and enjoy the same bound
Π_{t=1}^T κ(γ_t, α_t). The trouble with this is that κ(γ_t, α_t) is not less than 1 unless α_t is small compared to
γ_t. To ensure progress, the weight α_t must be chosen adaptively as a function of γ_t. Since we do not
know what edge we will receive, we choose the cost matrix as before but anticipating an infinitesimally
small edge, in the spirit of [7] (and with some rescaling):
$$C(i, l) = \lim_{\eta \to 0} \frac{1}{\eta}\, C_\eta(i, l) = \begin{cases} e^{f_{t-1}(i,l) - f_{t-1}(i,1)} & \text{if } l > 1, \\ -\sum_{j=2}^{k} e^{f_{t-1}(i,j) - f_{t-1}(i,1)} & \text{if } l = 1. \end{cases} \tag{7}$$
[Figure 1 appears here: two rows of panels for the connect4, forest, letter, pendigits, poker, and satimage datasets; row (a) plots final test-error against maximum tree size, and row (b) plots test-error against rounds of boosting.]
Figure 1: Figure 1(a) plots the final test-errors of M1 (black, dashed), MH (blue, dotted) and New method (red,
solid) against the maximum tree-sizes allowed as weak classifiers. Figure 1(b) plots how fast the test-errors of
these algorithms drop with rounds, when the maximum tree-size allowed is 5.
Since Weak-Learner cooperates, we expect the edge γ_t of the returned classifier h_t on the supplied
cost-matrix lim_{η→0} C_η to be more than just infinitesimal. In that case, by continuity, there are
non-infinitesimal choices of the weight α_t such that the edge γ_t achieved by h_t on the cost-matrix C_{α_t}
remains large enough to ensure κ(γ_t, α_t) < 1. In fact, with any choice of α_t, we get
$$\kappa(\gamma_t, \alpha_t) \le 1 - \tfrac{1}{2}\left(e^{\alpha_t} - e^{-\alpha_t}\right)\gamma_t + \tfrac{1}{2}\left(e^{\alpha_t} + e^{-\alpha_t} - 2\right)$$
(supplement). Tuning α_t to (1/2) ln((1 + γ_t)/(1 − γ_t)) results in κ(γ_t, α_t) ≤ √(1 − γ_t²). This algorithm
is adaptive, and ensures that the loss, and hence error, after T rounds is at most
$$(k-1) \prod_{t=1}^{T} \sqrt{1 - \gamma_t^2} \;\le\; (k-1) \exp\Big\{-\tfrac{1}{2} \sum_{t=1}^{T} \gamma_t^2\Big\}.$$
5 Experiments
We report preliminary experimental results on six varying multiclass UCI datasets.
The first set of experiments were aimed at determining the overall performance of our new algorithm.
We compared a standard implementation M1 of AdaBoost.M1 with C4.5 as weak learner, and the
Boostexter implementation MH of AdaBoost.MH using stumps [20], with the adaptive algorithm
described in Section 4, which we call New method, using a naive greedy tree-searching algorithm
Greedy for the weak-learner. The size of trees was chosen to be of the same order as the tree sizes
used by M1. Test errors after 500 rounds of boosting are plotted in Figure 2. The performance is
comparable with M1 and far better than MH (understandably, since stumps are far weaker than trees),
even though our weak-learner is very naive compared to C4.5.

[Figure 2 appears here: a bar plot of final test-errors of M1, MH, and New method on the connect4, forest, letter, pendigits, poker, and satimage datasets.]

Figure 2: This is a plot of the final test-errors of standard implementations of M1, MH and New method after 500 rounds of boosting.
We next investigated how each algorithm performs with
less powerful weak-classifiers, namely, decision trees whose size has been sharply limited to various
pre-specified limits. Figure 1(a) shows test-error plotted as a function of tree size. As predicted by
our theory, our algorithm succeeds in boosting the accuracy even when the tree size is too small
to meet the stronger weak learning assumptions of the other algorithms. The differences in performance are particularly strong when using the smallest tree sizes.
More insight is provided by plots in Figure 1(b) of the rate of convergence of test error with rounds
when the tree size allowed is very small (5). Both M1 and MH drive down the error for a few rounds.
But since boosting keeps creating harder cost-matrices, very soon the small-tree learning algorithms
are no longer able to meet the excessive requirements of M1 and MH. However, our algorithm makes
more reasonable demands that are easily met by the weak learner.
References
[1] Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, pages 415–424, 2008.
[2] Erin L. Allwein, Robert E. Schapire, and Yoram Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141, 2000.
[3] Alina Beygelzimer, John Langford, and Pradeep Ravikumar. Error-correcting tournaments. In Algorithmic Learning Theory: 20th International Conference, pages 247–262, 2009.
[4] Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, January 1995.
[5] Günther Eibl and Karl-Peter Pfeiffer. Multiclass boosting for weak classifiers. Journal of Machine Learning Research, 6:189–210, 2005.
[6] Yoav Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
[7] Yoav Freund. An adaptive version of the boost by majority algorithm. Machine Learning, 43(3):293–318, June 2001.
[8] Yoav Freund and Manfred Opper. Continuous drifting games. Journal of Computer and System Sciences, pages 113–132, 2002.
[9] Yoav Freund and Robert E. Schapire. Experiments with a new boosting algorithm. In Machine Learning: Proceedings of the Thirteenth International Conference, pages 148–156, 1996.
[10] Yoav Freund and Robert E. Schapire. Game theory, on-line prediction and boosting. In Proceedings of the Ninth Annual Conference on Computational Learning Theory, pages 325–332, 1996.
[11] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, August 1997.
[12] Trevor Hastie and Robert Tibshirani. Classification by pairwise coupling. Annals of Statistics, 26(2):451–471, 1998.
[13] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30(1), February 2002.
[14] Gunnar Rätsch and Manfred K. Warmuth. Efficient margin maximizing with boosting. Journal of Machine Learning Research, 6:2131–2152, 2005.
[15] Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990.
[16] Robert E. Schapire. Drifting games. Machine Learning, 43(3):265–291, June 2001.
[17] Robert E. Schapire. The boosting approach to machine learning: An overview. In MSRI Workshop on Nonlinear Estimation and Classification, 2002.
[18] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651–1686, October 1998.
[19] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336, December 1999.
[20] Robert E. Schapire and Yoram Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135–168, May/June 2000.
[21] Ji Zhu, Hui Zou, Saharon Rosset, and Trevor Hastie. Multi-class AdaBoost. Statistics and Its Interface, 2:349–360, 2009.
benefit:2 opper:1 superficially:1 rich:1 computes:1 collection:1 adaptive:11 clue:1 simplified:2 universally:1 made:3 far:6 employing:1 sj:1 implicitly:3 keep:1 overfitting:2 mm0:1 b1:2 assumed:2 xi:7 continuous:2 sk:1 reality:1 forest:3 hc:1 investigated:1 zou:1 understandably:1 pk:8 main:2 linearly:1 whole:3 underfit:1 s2:1 bounding:1 allowed:3 x1:1 fails:2 position:1 exponential:2 down:6 rk:6 theorem:13 specific:1 showing:1 maxi:1 explored:1 faint:1 rakhlin:1 weakest:2 exists:3 consist:1 workshop:1 classfier:1 effectively:2 adding:1 ci:1 supplement:4 hui:1 demand:4 margin:4 mildly:1 simply:1 likely:1 expressed:1 applies:1 satisfies:11 lth:1 goal:1 presentation:1 satimage:3 hard:1 change:2 specifically:2 reducing:1 averaging:1 lemma:1 called:3 experimental:1 player:7 vote:2 succeeds:1 atsch:1 formally:1 latter:3 meant:1 alexander:1 absolutely:1 constructive:1 princeton:3 phenomenon:1 |
3,463 | 4,136 | Tiled convolutional neural networks
Quoc V. Le, Jiquan Ngiam, Zhenghao Chen, Daniel Chia, Pang Wei Koh, Andrew Y. Ng
Computer Science Department, Stanford University
{quocle,jngiam,zhenghao,danchia,pangwei,ang}@cs.stanford.edu
Abstract
Convolutional neural networks (CNNs) have been successfully applied to many
tasks such as digit and object recognition. Using convolutional (tied) weights
significantly reduces the number of parameters that have to be learned, and also
allows translational invariance to be hard-coded into the architecture. In this paper, we consider the problem of learning invariances, rather than relying on hard-coding. We propose tiled convolutional neural networks (Tiled CNNs), which use
a regular "tiled" pattern of tied weights that does not require that adjacent hidden
units share identical weights, but instead requires only that hidden units k steps
away from each other have tied weights. By pooling over neighboring units,
this architecture is able to learn complex invariances (such as scale and rotational
invariance) beyond translational invariance. Further, it also enjoys much of CNNs?
advantage of having a relatively small number of learned parameters (such as ease
of learning and greater scalability). We provide an efficient learning algorithm for
Tiled CNNs based on Topographic ICA, and show that learning complex invariant
features allows us to achieve highly competitive results for both the NORB and
CIFAR-10 datasets.
1 Introduction
Convolutional neural networks (CNNs) [1] have been successfully applied to many recognition
tasks. These tasks include digit recognition (MNIST dataset [2]), object recognition (NORB
dataset [3]), and natural language processing [4]. CNNs take translated versions of the same basis function, and "pool" over them to build translational invariant features. By sharing the same
basis function across different image locations (weight-tying), CNNs have significantly fewer learnable parameters which makes it possible to train them with fewer examples than if entirely different
basis functions were learned at different locations (untied weights). Furthermore, CNNs naturally
enjoy translational invariance, since this is hard-coded into the network architecture. However, one
disadvantage of this hard-coding approach is that the pooling architecture captures only translational
invariance; the network does not, for example, pool across units that are rotations of each other or
capture more complex invariances, such as out-of-plane rotations.
Is it better to hard-code translational invariance ? since this is a useful form of prior knowledge ?
or let the network learn its own invariances from unlabeled data? In this paper, we show that the
latter is superior and describe an algorithm that can do so, outperforming convolutional methods. In
particular, we present tiled convolutional networks (Tiled CNNs), which use a novel weight-tying
scheme (?tiling?) that simultaneously enjoys the benefit of significantly reducing the number of
learnable parameters while giving the algorithm flexibility to learn other invariances. Our method is
based on only constraining weights/basis functions k steps away from each other to be equal (with
the special case of k = 1 corresponding to convolutional networks).
In order to learn these invariances from unlabeled data, we employ unsupervised pretraining, which
has been shown to help performance [5, 6, 7]. In particular, we use a modification of Topographic
ICA (TICA) [8], which learns to organize features in a topographical map by pooling together groups of related features. By pooling together local groups of features, it produces representations that are robust to local transformations [9]. We show in this paper how TICA can be efficiently used to pretrain Tiled CNNs through the use of local orthogonality.
Figure 1: Left: Convolutional Neural Networks with local receptive fields and tied weights. Right: Partially untied local receptive field networks - Tiled CNNs. Units with the same color belong to the same map; within each map, units with the same fill texture have tied weights. (Network diagrams in the paper are shown in 1D for clarity.)
The resulting Tiled CNNs pretrained with TICA are indeed able to learn invariant representations,
with pooling units that are robust to both scaling and rotation. We find that this improves classification performance, enabling Tiled CNNs to be competitive with previously published results on the
NORB [3] and CIFAR-10 [10] datasets.
2 Tiled CNNs
CNNs [1, 11] are based on two key concepts: local receptive fields, and weight-tying. Using local
receptive fields means that each unit in the network only "looks" at a small, localized region of the
input image. This is more computationally efficient than having full receptive fields, and allows
CNNs to scale up well. Weight-tying additionally enforces that each first-layer (simple) unit shares
the same weights (see Figure 1-Left). This reduces the number of learnable parameters, and (by
pooling over neighboring units) further hard-codes translational invariance into the model.
Even though weight-tying allows one to hard-code translational invariance, it also prevents the pooling units from capturing more complex invariances, such as scale and rotation invariance. This is
because the second layer units are constrained to pool over translations of identical bases. In this
paper, rather than tying all of the weights in the network together, we instead develop a method that
leaves nearby bases untied, but far-apart bases tied. This lets second-layer units pool over simple
units that have different basis functions, and hence learn a more complex range of invariances.
We call this local untying of weights "tiling." Tiled CNNs are parametrized by a tile size k: we
constrain only units that are k steps away from each other to be tied. By varying k, we obtain a
spectrum of models which trade off between being able to learn complex invariances, and having
few learnable parameters. At one end of the spectrum we have traditional CNNs (k = 1), and at the
other, we have fully untied simple units.
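To make the tiling concrete, the following sketch (ours, not part of the original architecture description; the 1D setting and function names are illustrative) shows how the tile size k assigns a shared weight bank to each simple-unit location: units whose locations differ by a multiple of k share weights.

import numpy as np

def weight_bank(location, k):
    # Units whose locations differ by a multiple of k share weights (a sketch;
    # the paper's 2D tiling applies the same rule along each image dimension).
    return location % k

n, s = 10, 3                      # 1D input of length n, receptive fields of size s
for k in (1, 2, n - s + 1):
    print(k, [weight_bank(i, k) for i in range(n - s + 1)])
# k=1 recovers a standard CNN (one shared bank); k=n-s+1 is fully untied.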
Next, we will allow our model to use multiple "maps," so as to learn highly overcomplete representations. A map is a set of pooling units and simple units that collectively cover the entire image
(see Figure 1-Right). When varying the tiling size, we change the degree of weight tying within
each map; for example, if k = 1, the simple units within each map will have the same weights. In
our model, simple units in different maps are never tied. By having units in different maps learn
different features, our model can learn a rich and diverse set of features. Tiled CNNs with multiple
maps enjoy the twin benefits of (i) being able to represent complex invariances, by pooling over
(partially) untied weights, and (ii) having a relatively small number of learnable parameters.
Figure 2: Left: TICA network architecture. Right: TICA first layer filters (2D topography, 25 rows
of W ).
Unfortunately, existing methods for pretraining CNNs [11, 12] are not suitable for untied weights;
for example, the CDBN algorithm [11] breaks down without the weight-tying constraints. In the
following sections, we discuss a pretraining method for Tiled CNNs based on the TICA algorithm.
3 Unsupervised feature learning via TICA
TICA is an unsupervised learning algorithm that learns features from unlabeled image patches.
A TICA network [9] can be described as a two-layered network (Figure 2-Left), with square and
square-root nonlinearities in the first and second layers respectively. The weights W in the first
layer are learned, while the weights V in the second layer are fixed and hard-coded to represent
the neighborhood/topographical structure of the neurons in the first layer. Specifically, each second
layer hidden unit pi pools over a small neighborhood of adjacent first layer units hi . We call the hi
and pi simple and pooling units, respectively.
More precisely, given an input pattern $x^{(t)}$, the activation of each second layer unit is
$$p_i(x^{(t)}; W, V) = \sqrt{\textstyle\sum_{k=1}^{m} V_{ik}\big(\sum_{j=1}^{n} W_{kj} x_j^{(t)}\big)^2}.$$
TICA learns the parameters $W$ through finding sparse feature representations in the second layer, by solving:
$$\mathop{\mathrm{minimize}}_{W} \ \textstyle\sum_{t=1}^{T}\sum_{i=1}^{m} p_i(x^{(t)}; W, V), \quad \text{subject to } WW^T = I \qquad (1)$$
where the input patterns $\{x^{(t)}\}_{t=1}^{T}$ are whitened.1 Here, $W \in \mathbb{R}^{m \times n}$ and $V \in \mathbb{R}^{m \times m}$, where n is the size of the input and m is the number of hidden units in a layer. V is a fixed matrix ($V_{ij} = 1$ or $0$) that encodes the 2D topography of the hidden units $h_i$. Specifically, the $h_i$ units lie on a 2D grid, with each $p_i$ connected to a contiguous 3x3 (or other size) block of $h_i$ units.2 The case of each $p_i$ being connected to exactly one $h_i$ corresponds to standard ICA. The orthogonality constraint $WW^T = I$ provides competitiveness and ensures that the learned features are diverse.
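For concreteness, a minimal NumPy sketch (ours) of the two-layer activations just defined, with a toy 1D topography in place of the 2D grid, is:

import numpy as np

def tica_activations(X, W, V):
    # X: T x n whitened inputs, W: m x n first-layer weights,
    # V: m x m fixed 0/1 pooling matrix; returns the T x m pooling outputs p_i.
    H = (X @ W.T) ** 2        # squared simple-unit responses
    return np.sqrt(H @ V.T)

# Toy 1D topography: pooling unit i covers simple units {i-1, i, i+1}.
m, n, T = 4, 6, 3
V = np.zeros((m, m))
for i in range(m):
    V[i, max(0, i - 1):min(m, i + 2)] = 1.0
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
W = Q.T                       # m x n with W @ W.T = I (requires m <= n)
X = rng.standard_normal((T, n))
print(tica_activations(X, W, V).shape)   # (3, 4)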
One important property of TICA is that it can learn invariances even when trained only on unlabeled
data, as demonstrated in [8, 9]. This is due both to the pooling architecture, which gives rise to pooling units that are robust to local transformations of their inputs, and the learning algorithm, which
promotes selectivity by optimizing for sparsity. This combination of robustness and selectivity is
central to feature invariance, which is in turn essential for recognition tasks [13].
If we choose square and square-root activations for the simple and pooling units in the Tiled CNN,
we can view the Tiled CNN as a special case of a TICA network, with the topography of the pooling
units specifying the matrix V .3 Crucially, Tiled CNNs incorporate local receptive fields, which play
an important role in speeding up TICA. We discuss this next.
1 Whitening means that they have been linearly transformed to have zero mean and identity covariance.
2 For illustration, however, the figures in this paper depict x_i, h_i and p_i in 1D and show a 1D topography.
3 The locality constraint, in addition to being biologically motivated by the receptive field organization patterns in V1, is also a natural approximation to the original TICA algorithm, as the original learned receptive fields tend to be very localized even without any explicit locality constraint. For example, when trained on natural images, TICA's first layer weights usually resemble localized Gabor filters (Figure 2-Right).
4 Local receptive fields in TICA
Tiled CNNs typically perform much better at object recognition when the learned representation
consists of multiple feature maps (Figure 1-Right). This corresponds to training TICA with an overcomplete representation (m > n). When learning overcomplete representations [14], the orthogonality constraint cannot be satisfied exactly, and we instead try to satisfy an approximate orthogonality constraint [15]. Unfortunately, these approximate orthogonality constraints are computationally
expensive and have hyperparameters which need to be extensively tuned. Much of this tuning can be
avoided by using score matching [16], but this is computationally even more expensive, and while
orthogonalization can be avoided altogether with topographic sparse coding, those models are also
expensive as they require further work either for inference at prediction time [9, 14] or for learning
a decoder unit at training time [17].
We can avoid approximate orthogonalization by using local receptive fields, which are inherently
built into Tiled CNNs. With these, the weight matrix W for each simple unit is constrained to be
0 outside a small local region. This locality constraint automatically ensures that the weights of
any two simple units with non-overlapping receptive fields are orthogonal, without the need for an
explicit orthogonality constraint. Empirically, we find that orthogonalizing partially overlapping
receptive fields is not necessary for learning distinct, informative features either.
However, orthogonalization is still needed to decorrelate units that occupy the same position in their
respective maps, for they look at the same region on the image. Fortunately, this local orthogonalization is cheap: for example, if there are l maps and if each receptive field is restricted to look at an
input patch that contains s pixels, we would only need to orthogonalize the rows of an l-by-s matrix to ensure that the l features over these s pixels are orthogonal. Specifically, so long as l ≤ s, we can
demand that these l units that share an input patch be orthogonal. Using this method, we can learn
networks that are overcomplete by a factor of about s (i.e., by learning l = s maps), while having to
orthogonalize only matrices that are l-by-s. This is significantly lower in cost than standard TICA.
For l maps, our computational cost is O(ls²n), compared to standard TICA's O(l²n³).
In general, we will have l × k × s learnable parameters for an input of size n. We note that setting k to its maximum value of n − s + 1 gives exactly the untied local TICA model outlined in the previous section.4
5 Pretraining Tiled CNNs with local TICA
Algorithm 1 Unsupervised pretraining of Tiled CNNs with TICA (line search)
Input: {x^(t)}_{t=1}^T, W, V, k, s    // k is the tile size, s is the receptive field size
Output: W
repeat
  f_old ← Σ_{t=1}^T Σ_{i=1}^m p_i(x^(t); W, V),   g ← ∂f_old/∂W
  f_new ← +∞,  α ← 1
  while f_new ≥ f_old do
    W_new ← W − αg
    W_new ← localize(W_new, s)
    W_new ← tie_weights(W_new, k)
    W_new ← orthogonalize_local_RF(W_new)
    f_new ← Σ_{t=1}^T Σ_{i=1}^m p_i(x^(t); W_new, V)
    α ← 0.5α
  end while
  W ← W_new
until convergence
Our pretraining algorithm, which is based on gradient descent on the TICA objective function (1), is
shown in Algorithm 1. The innermost loop is a simple implementation of backtracking linesearch.
4 For a 2D input image of size n×n and local RF of size s×s, the maximum value of k is (n − s + 1)².
In orthogonalize_local_RF(W_new), we only orthogonalize the weights that have completely overlapping receptive fields. In tie_weights, we enforce weight-tying by averaging each set of tied weights.
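A compact NumPy sketch (ours, not the authors' MATLAB implementation) of one outer iteration of Algorithm 1 is given below; the three projection helpers are assumed to be supplied by the caller and are only schematic here.

import numpy as np

def tica_objective(W, X, V, eps=1e-12):
    # f(W) = sum_t sum_i sqrt( sum_k V_ik ((W x^(t))_k)^2 )
    return np.sqrt(((X @ W.T) ** 2) @ V.T + eps).sum()

def tica_gradient(W, X, V, eps=1e-12):
    Z = X @ W.T                          # T x m pre-activations
    P = np.sqrt((Z ** 2) @ V.T + eps)    # T x m pooling activations
    return (Z * ((1.0 / P) @ V)).T @ X   # chain rule through the square root

def pretrain_iteration(W, X, V, localize, tie_weights, orthogonalize_local_rf):
    f_old = tica_objective(W, X, V)
    g = tica_gradient(W, X, V)
    f_new, alpha = np.inf, 1.0
    while f_new >= f_old:                # backtracking line search
        W_new = orthogonalize_local_rf(tie_weights(localize(W - alpha * g)))
        f_new = tica_objective(W_new, X, V)
        alpha *= 0.5
    return W_new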
The algorithm is trained by batch projected gradient descent and usually requires little tuning of
optimization parameters. This is because TICA?s tractable objective function allows us to monitor
convergence easily. In contrast, other unsupervised feature learning algorithms such as RBMs [6]
and autoencoders [18] require much more parameter tuning, especially during optimization.
6 Experiments
6.1 Speed-up
We first establish that the local receptive fields intrinsic to Tiled CNNs allow us to implement TICA learning for overcomplete representations in a much more efficient manner. Figure 3 shows the relative speed-up of pretraining Tiled CNNs over standard TICA using approximate fixed-point orthogonalization (W ← (3/2)W − (1/2)WW^T W) [15]. These experiments were run on 10000 images of size 32x32 or 50x50, with s = 8.
We note that the weights in this experiment were left fully untied, i.e., k = n − s + 1. Hence, the speed-up observed here is not from an efficient convolutional implementation, but purely due to the local receptive fields. Overcoming this computational challenge is the key that allows Tiled CNNs to successfully use TICA to learn features from unlabeled data.5
Figure 3: Speed-up of Tiled CNNs compared to standard TICA.
6.2 Classification on NORB
Next, we show that TICA pretraining for Tiled CNNs performs well on object recognition. We start
with the normalized-uniform set for NORB, which consists of 24300 training examples and 24300
test examples drawn from 5 categories. In our case, each example is a preprocessed pair of 32x32
images.6
In our classification experiments, we fix the size of each local receptive field to 8x8, and set V such
that each pooling unit pi in the second layer pools over a block of 3x3 simple units in the first layer,
without wraparound at the borders. The number of pooling units in each map is exactly the same as
the number of simple units. We densely tile the input images with overlapping 8x8 local receptive
fields, with a step size (or ?stride?) of 1. This gives us 25 ? 25 = 625 simple units and 625 pooling
units per map in our experiments on 32x32 images.
A summary of results is reported in Table 1.
6.2.1 Unsupervised pretraining
We first consider the case in which the features are learned purely from unsupervised data. In
particular, we use the NORB training set itself (without the labels) as a source of unsupervised data
5 All algorithms are implemented in MATLAB, and executed on a computer with 3.0GHz CPU, 9Gb RAM. While orthogonalization alone is 10^4 times faster in Tiled CNNs, other computations such as gradient calculations reduce its overall speed-up factor to 10x-250x.
6 Each NORB example is a binocular pair of 96x96 images. To reduce processing time, we downsampled
each 96x96 image to 32x32 pixels. Hence, each simple unit sees 128 pixels from an 8x8 patch from each of the
two binocular images. The input was whitened using ZCA (Zero-Phase Components Analysis).
Table 1: Test set accuracy on NORB
Algorithm | Accuracy
Tiled CNNs (with finetuning) (Section 6.2.2) | 96.1%
Tiled CNNs (without finetuning) (Section 6.2.1) | 94.5%
Standard TICA (10x overcomplete) | 89.6%
Convolutional Neural Networks [19], [12] | 94.1%, 94.4%
3D Deep Belief Networks [19] | 93.5%
Support Vector Machines [20] | 88.4%
Deep Boltzmann Machines [21] | 92.8%
with which to learn the weights W of the Tiled CNN. We call this initial phase the unsupervised
pretraining phase.
After learning a feature representation from the unlabeled data, we train a linear classifier on the
output of the Tiled CNN network (i.e., the activations of the pooling units) on the labeled training
set. During this supervised training phase, only the weights of the linear classifier were learned,
while the lower weights of the Tiled CNN model remained fixed.
We train a range of models to investigate the role of the tile size k and the number of maps l.7 The test
set accuracy results of these models are shown in Figure 4-Left. Using a randomly sampled hold-out
validation set of 2430 examples (10%) taken from the training set, we selected a convolutional model
with 48 maps that achieved an accuracy of 94.5% on the test set, indicating that Tiled CNNs learned
purely on unsupervised data compare favorably to many state-of-the-art algorithms on NORB.
6.2.2 Supervised finetuning of W
Next, we study the effects of supervised finetuning [23] on the models produced by the unsupervised
pretraining phase. Supervised finetuning takes place after unsupervised pretraining, but before the
supervised training of the classifier.
Using softmax regression to calculate the gradients, we backpropagated the error signal from the
output back to the learned features in order to update W , the weights of the simple units in the Tiled
CNN model. During the finetuning step, the weights W were adjusted without orthogonalization.
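As a sketch of this step (ours; the classifier weights C, the single-example setting, and the absence of weight decay are simplifying assumptions), the gradient of a softmax cross-entropy loss with respect to W can be computed as follows:

import numpy as np

def finetune_gradient(W, V, C, x, y, eps=1e-12):
    # One-example gradient w.r.t. W; C maps pooling outputs to class scores,
    # y is the true class index. No orthogonalization is applied, as stated above.
    z = W @ x                                 # simple-unit pre-activations
    p = np.sqrt(V @ (z ** 2) + eps)           # pooling activations
    scores = C @ p
    q = np.exp(scores - scores.max()); q /= q.sum()
    dscores = q; dscores[y] -= 1.0            # softmax cross-entropy gradient
    dp = C.T @ dscores
    dz = z * (V.T @ (dp / p))                 # back through sqrt(V z^2)
    return np.outer(dz, x)                    # gradient w.r.t. W (z = W x)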
The results of supervised finetuning on our models are shown in Figure 4-Right. As above, we used a
validation set comprising 10% of the training data for model selection. Models with larger numbers
of maps tended to overfit and hence performed poorly on the validation set. The best performing
fine-tuned model on the validation set was the model with 16 maps and k = 2, which achieved
a test-set accuracy of 96.1%. This substantially outperforms standard TICA, as well as the best
published results on NORB to this date (see Table 1).
6.2.3 Limited training data
To test the ability of our pretrained features to generalize across rotations and lighting conditions given only a weak supervised signal, we limited the labeled training set to comprise only examples with a particular set of viewing angles and lighting conditions. Specifically, NORB contains images spanning 9 elevations, 18 azimuths and 6 lighting conditions, and we trained our linear classifier only on data with elevations {2, 4, 6}, azimuths {10, 18, 24} and lighting conditions {1, 3, 5}. Thus, for each object instance, the linear classifier sees only 27 training images, making for a total of 675 out of the possible 24300 training examples.
Figure 5: Test set accuracy on full and limited training sets.
7 We used an SVM [22] as the linear classifier and determined C by cross-validation over {10^-4, 10^-3, ..., 10^4}. Models were trained with various untied map sizes k ∈ {1, 2, 9, 16, 25} and number of maps l ∈ {4, 6, 10, 16}. When k = 1, we were able to use an efficient convolutional implementation to scale up the number of maps in the models, allowing us to train additional models with l ∈ {22, 36, 48}.
Figure 4: Left: NORB test set accuracy across various tile sizes and numbers of maps, without finetuning. Right: NORB test set accuracy, with finetuning.
Using the pretrained network in Section 6.2.1, we trained a linear classifier on these 675 labeled
examples. We obtained an accuracy of 72.2% on the full test set using the model with k = 2 and
22 maps. A smaller, approximately 2.5x overcomplete model with k = 2 and 4 maps obtained an
accuracy of 64.9%. In stark contrast, raw pixel performance dropped sharply from 80.2% with a full
supervised training set, to a near-chance level of 20.8% on this limited training set (Figure 5).
These results demonstrate that Tiled CNNs perform well even with limited labeled data. This is most
likely because the partial weight-tying results in a relatively small number of learnable parameters,
reducing the need for large amounts of labeled data.
6.3 Classification on CIFAR-10
Table 2: Test set accuracy on CIFAR-10
Algorithm | Accuracy
Deep Tiled CNNs (s=4, with finetuning) (Section 6.3.2) | 73.1%
Tiled CNNs (s=8, without finetuning) (Section 6.3.1) | 66.1%
Standard TICA (10x, fixed-point orthogonalization) | 56.1%
Raw pixels [10] | 41.1%
RBM (one layer, 10000 units, finetuning) [10] | 64.8%
RBM (two layers, 10000 units, finetuning both layers) [10] | 60.3%
RBM (two layers, 10000 units, finetuning top layer) [10] | 62.2%
mcRBM (convolutional, trained on two million tiny images) [24] | 71.0%
Local Coordinate Coding (LCC) [25] | 72.3%
Improved Local Coordinate Coding (Improved LCC) [25] | 74.5%
The CIFAR-10 dataset contains 50000 training images and 10000 test images drawn from 10 categories.8 A summary of results is reported in Table 2.
6.3.1 Unsupervised pretraining and supervised finetuning
As before, models were trained with tile size k ∈ {1, 2, 25}, and number of maps l ∈ {4, 10, 16, 22, 32}. The convolutional model (k = 1) was also trained with l = 48 maps. This
48-map convolutional model performed the best on our 10% hold-out validation set, and achieved a
test set accuracy of 66.1%. We find that supervised finetuning of these models on CIFAR-10 causes
overfitting, and generally reduces test-set accuracy; the top model on the validation set, with 32
maps and k = 1, only achieves 65.1%.
8 Each CIFAR-10 example is a 32x32 RGB image, also whitened using ZCA. Hence, each simple unit sees
three patches from three channels of the color image input (RGB).
6.3.2 Deep Tiled CNNs
We additionally investigate the possibility of training a deep Tiled CNN in a greedy layer-wise
fashion, similar to models such as DBNs [6] and stacked autoencoders [26, 18]. We constructed
this network by stacking two Tiled CNNs, each with 10 maps and k = 2. The resulting four-layer
network has the structure W1 → V1 → W2 → V2, where the weights W1 are local receptive fields of size 4x4, and W2 is of size 3x3, i.e., each unit in the third layer "looks" at a 3x3 window of each
of the 10 maps in the first layer. These parameters were chosen by an efficient architecture search
[27] on the hold-out validation set. The number of maps in the third and fourth layer is also 10.
After finetuning, we found that the deep model outperformed all previous models on the validation
set, and achieved a test set accuracy of 73.1%. This demonstrates the potential of deep Tiled CNNs
to learn more complex representations.
6.4 Effects of optimizing the pooling units
When the tile size is 1 (i.e., a fully tied model), a naïve approach to learning the filter weights is to
directly train the first layer filters using small patches (e.g., 8x8) randomly sampled from the dataset,
with a method such as ICA. This method is computationally more attractive and probably easier to
implement. Here, we investigate if such benefits come at the expense of classification accuracy.
We use ICA to learn the first layer weights on CIFAR-10 with 16 filters. These weights are then used
in a Tiled CNN with a tile size of 1 and 16 maps. This method is compared to pretraining the model
of the same architecture with TICA. For both methods, we do not use finetuning. Interestingly, classification on the test set shows that the naïve approach results in significantly reduced classification
accuracy: the naïve approach obtains 51.54% on the test set, while pretraining with TICA achieves
58.66%. These results confirm that optimizing for sparsity of the pooling units results in better
features than just naïvely approximating the first layer weights.
7 Discussion and Conclusion
Our results show that untying weights is beneficial for classification performance. Specifically, we
find that selecting a tile size of k = 2 achieves the best results for both the NORB and CIFAR-10
datasets, even with deep networks. More importantly, untying weights allow the networks to learn
more complex invariances from unlabeled data. By visualizing [28, 29] the range of optimal stimuli that activate each pooling unit in a Tiled CNN, we found units that were scale and rotationally
invariant.9 We note that a standard CNN is unlikely to be invariant to these transformations.
A natural choice of the tile size k would be to set it to the size of the pooling region p, which in this
case is 3. In this case, each pooling unit always combines simple units which are not tied. However,
increasing the tile size leads to a higher degree of freedom in the models, making them susceptible to
overfitting (learning unwanted non-stationary statistics of the dataset). Fortunately, the Tiled CNN
only requires unlabeled data for training, which can be obtained cheaply. Our preliminary results
on networks pretrained using 250000 unlabeled images from the Tiny images dataset [30] show that
performance increases as k goes from 1 to 3, flattening out at k = 4. This suggests that when there
is sufficient data to avoid overfitting, setting k = p can be a very good choice.
In this paper, we introduced Tiled CNNs as an extension of CNNs that support both unsupervised pretraining and weight tiling. The idea of tiling, or partial untying of filter weights, is a
parametrization of a spectrum of models which includes both fully-convolutional and fully-untied
weight schemes as natural special cases. Furthermore, the use of local receptive fields enable our
models to scale up well, producing massively overcomplete representations that perform well on
classification tasks. These principles allow Tiled CNNs to achieve competitive results on the NORB
and CIFAR-10 object recognition datasets. Importantly, tiling is directly applicable and can potentially benefit a wide range of other feature learning models.
Acknowledgements: We thank Adam Coates, David Kamm, Andrew Maas, Andrew Saxe, Serena
Yeung and Chenguang Zhu for insightful discussions. This work was supported by the DARPA Deep
Learning program under contract number FA8650-10-C-7020.
9 These visualizations are available at http://ai.stanford.edu/~quocle/.
References
[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient based learning applied to document recognition.
Proceeding of the IEEE, 1998.
[2] P. Simard, D. Steinkraus, and J. Platt. Best practices for convolutional neural networks applied to visual
document analysis. In ICDAR, 2003.
[3] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to
pose and lighting. In CVPR, 2004.
[4] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks
with multitask learning. In ICML, 2008.
[5] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. Self-taught learning:
Transfer learning from unlabeled data. In ICML, 2007.
[6] G.E. Hinton, S. Osindero, and Y.W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.
[7] D. Erhan, A. Courville, Y. Bengio, and P. Vincent. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 2010.
[8] A. Hyvarinen and P. Hoyer. Topographic independent component analysis as a model of V1 organization
and receptive fields. Neural Computation, 2001.
[9] A. Hyvarinen, J. Hurri, and P. Hoyer. Natural Image Statistics. Springer, 2009.
[10] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, U. Toronto, 2009.
[11] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
[12] M.A. Ranzato K. Jarrett, K. Kavukcuoglu and Y. LeCun. What is the best multi-stage architecture for
object recognition? In ICCV, 2009.
[13] I. Goodfellow, Q.V. Le, A. Saxe, H. Lee, and A.Y. Ng. Measuring invariances in deep networks. In NIPS,
2010.
[14] B. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code
for natural images. Nature, 1996.
[15] A. Hyvarinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley Interscience, 2001.
[16] A. Hyvarinen. Estimation of non-normalized statistical models using score matching. JMLR, 2005.
[17] K. Kavukcuoglu, M.A. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic filter maps. In CVPR, 2009.
[18] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layerwise training of deep networks. In
NIPS, 2007.
[19] V. Nair and G. Hinton. 3D object recognition with deep belief nets. In NIPS, 2009.
[20] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. In Large-Scale Kernel Machines, 2007.
[21] R. Salakhutdinov and H. Larochelle. Efficient learning of Deep Boltzmann Machines. In AISTATS, 2010.
[22] R.E. Fan, K.W. Chang, C.J. Hsieh, X.R. Wang, and C.J. Lin. LIBLINEAR: A library for large linear
classification. JMLR, 9:1871-1874, 2008.
[23] G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science,
2006.
[24] M. Ranzato and G. Hinton. Modeling pixel means and covariances using factorized third-order boltzmann
machines. In CVPR, 2010.
[25] K. Yu and T. Zhang. Improved local coordinate coding using local tangents. In ICML, 2010.
[26] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009.
[27] A. Saxe, M. Bhand, Z. Chen, P. W. Koh, B. Suresh, and A. Y. Ng. On random weights and unsupervised
feature learning. In Workshop: Deep Learning and Unsupervised Feature Learning (NIPS), 2010.
[28] D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing higher-layer features of a deep network.
Technical report, University of Montreal, 2009.
[29] P. Berkes and L. Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal
of Vision, 2005.
[30] R. Fergus A. Torralba and W. T. Freeman. 80 million tiny images: a large dataset for non-parametric
object and scene recognition. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008.
A Family of Penalty Functions for Structured Sparsity
Charles A. Micchelli*
Department of Mathematics
City University of Hong Kong
83 Tat Chee Avenue, Kowloon Tong
Hong Kong
charles [email protected]
Jean M. Morales
Department of Computer Science
University College London
Gower Street, London WC1E
England, UK
[email protected]
Massimiliano Pontil
Department of Computer Science
University College London
Gower Street, London WC1E
England, UK
[email protected]
Abstract
We study the problem of learning a sparse linear regression vector under additional conditions on the structure of its sparsity pattern. We present a family of
convex penalty functions, which encode this prior knowledge by means of a set of
constraints on the absolute values of the regression coefficients. This family subsumes the $\ell_1$ norm and is flexible enough to include different models of sparsity
patterns, which are of practical and theoretical importance. We establish some important properties of these functions and discuss some examples where they can be
computed explicitly. Moreover, we present a convergent optimization algorithm
for solving regularized least squares with these penalty functions. Numerical simulations highlight the benefit of structured sparsity and the advantage offered by
our approach over the Lasso and other related methods.
1 Introduction
The problem of sparse estimation is becoming increasingly important in machine learning and statistics. In its simplest form, this problem consists in estimating a regression vector $\beta^* \in \mathbb{R}^n$ from a data vector $y \in \mathbb{R}^m$, obtained from the model $y = X\beta^* + \varepsilon$, where $X$ is an $m \times n$ matrix, which may be fixed or randomly chosen, and $\varepsilon \in \mathbb{R}^m$ is a vector resulting from the presence of noise. An important rationale for sparse estimation comes from the observation that in many practical applications the number of parameters $n$ is much larger than the data size $m$, but the vector $\beta^*$ is known to be sparse, that is, most of its components are equal to zero. Under these circumstances, it has been shown that regularization with the $\ell_1$ norm, commonly referred to as the Lasso method, provides an effective means to estimate the underlying regression vector as well as its sparsity pattern; see for example [4, 12, 15] and references therein.
In this paper, we are interested in sparse estimation under additional conditions on the sparsity pattern of $\beta^*$. In other words, not only do we expect that $\beta^*$ is sparse but also that it is structured sparse,
namely certain configurations of its nonzero components are to be preferred to others. This problem
* C.A. Micchelli is also with the Dept. of Mathematics and Statistics, State University of New York, Albany,
USA. We are grateful to A. Argyriou and Y. Ying for valuable discussions. This work was supported by NSF
Grant ITR-0312113, Air Force Grant AFOSR-FA9550, and EPSRC Grant EP/D071542/1.
arises in several applications; see [10] for a discussion. The prior knowledge that we consider in this paper is that the vector $|\beta^*|$, whose components are the absolute values of the corresponding components of $\beta^*$, should belong to some prescribed convex set $\Lambda$. For certain choices of $\Lambda$ this implies a constraint on the sparsity pattern as well. For example, the set $\Lambda$ may include vectors with some desired monotonicity constraints, or other constraints on the "shape" of the regression vector.
Unfortunately, the constraint that $|\beta^*| \in \Lambda$ is nonconvex and its implementation is computationally challenging. To overcome this difficulty, we propose a novel family of penalty functions. It is based on an extension of the $\ell_1$ norm used by the Lasso method and involves the solution of a smooth convex optimization problem, which incorporates the structured sparsity constraints. As we shall see, a key property of our approach is that the penalty function equals the $\ell_1$ norm of a vector $\beta$ when $|\beta| \in \Lambda$ and is strictly greater than the $\ell_1$ norm otherwise. This observation suggests that the penalty function encourages the desired structured sparsity property.
There has been some recent research interest on structured sparsity, see [1, 2, 7, 9, 10, 11, 13, 16]
and references therein. Closest to our approach are penalty methods built around the idea of mixed
$\ell_1$-$\ell_2$ norms. In particular, the group Lasso method [16] assumes that the components of the underlying regression vector $\beta^*$ can be partitioned into prescribed groups, such that the restriction of $\beta^*$ to a group is equal to zero for most of the groups. This idea has been extended in [10, 17]
by considering the possibility that the groups overlap according to certain hierarchical or spatially
related structures. A limitation of these methods is that they can only handle sparsity patterns forming a single connected region. Our point of view is different from theirs and provides a means to
designing more general and flexible penalty functions which maintain convexity whilst modeling
richer model structures. For example, we will demonstrate that our family of penalty functions can
model sparsity patterns forming multiple connected regions of coefficients.
The paper is organized as follows. In Section 2 we define the learning method. In particular, we
describe the associated penalty function and establish some of its important properties. In Section
3 we provide examples of penalty functions, deriving the explicit analytical form in some important
cases, namely the case that the set ? is a box or the wedge with nonincreasing coordinates. In
Section 4 we address the issue of solving the learning method numerically by means of an alternating
minimization algorithm. Finally, in Section 5 we provide numerical simulations with this method,
showing the advantage offered by our approach.
2 Learning method
In this section, we introduce the learning method and establish some important properties of the
associated penalty function. We let $\mathbb{R}_{++}$ be the positive real line and let $\mathbb{N}_n$ be the set of positive integers up to $n$. We prescribe a convex subset $\Lambda$ of the positive orthant $\mathbb{R}^n_{++}$ and estimate $\beta^*$ by a solution of the convex optimization problem
$$\min\left\{\|X\beta - y\|_2^2 + 2\rho\,\Omega(\beta|\Lambda) : \beta \in \mathbb{R}^n\right\}, \qquad (2.1)$$
where $\|\cdot\|_2$ denotes the Euclidean norm. The penalty function takes the form
$$\Omega(\beta|\Lambda) = \inf\{\Gamma(\beta, \lambda) : \lambda \in \Lambda\} \qquad (2.2)$$
and the function $\Gamma : \mathbb{R}^n \times \mathbb{R}^n_{++} \to \mathbb{R}$ is given by the formula $\Gamma(\beta, \lambda) = \frac{1}{2}\sum_{i \in \mathbb{N}_n}\left(\frac{\beta_i^2}{\lambda_i} + \lambda_i\right)$.
Note that $\Gamma$ is convex on its domain because each of its summands is likewise a convex function.
Hence, when the set $\Lambda$ is convex it follows that $\Omega(\beta|\Lambda)$ is a convex function and (2.1) is a convex optimization problem. An essential idea behind our construction of this function is that, for every $\lambda \in \mathbb{R}_{++}$, the quadratic function $\Gamma(\beta, \lambda)$ provides a smooth approximation to $|\beta|$ from above, which is exact at $\lambda = |\beta|$. We indicate this graphically in Figure 1-a. This fact follows immediately by the arithmetic-geometric mean inequality, namely $(a + b)/2 \geq \sqrt{ab}$. Using the same inequality it also follows that the Lasso problem corresponds to (2.1) when $\Lambda = \mathbb{R}^n_{++}$, that is, it holds that $\Omega(\beta|\mathbb{R}^n_{++}) = \|\beta\|_1 := \sum_{i \in \mathbb{N}_n} |\beta_i|$. This important special case motivated us to consider the general method described above. The utility of (2.2) is that inserting it into (2.1) results in an optimization problem over $\beta$ and $\lambda$ with a continuously differentiable objective function. Hence, we have succeeded in expressing a nondifferentiable convex objective function by one which is continuously differentiable on its domain.
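The identity $\Omega(\beta|\mathbb{R}^n_{++}) = \|\beta\|_1$ is easy to check numerically. The following sketch (ours, purely illustrative) evaluates $\Gamma$ and its coordinate-wise minimizer $\lambda_i = |\beta_i|$:

import numpy as np

def gamma(beta, lam):
    # Gamma(beta, lambda) = (1/2) * sum_i (beta_i^2 / lambda_i + lambda_i)
    return 0.5 * np.sum(beta ** 2 / lam + lam)

beta = np.array([0.3, -1.2, 0.7, 2.0])
lam_star = np.abs(beta)              # coordinate-wise minimizer over R^n_{++}
print(gamma(beta, lam_star))         # 4.2
print(np.sum(np.abs(beta)))          # 4.2 = ||beta||_1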
The next proposition provides a justification of the penalty function as a means to incorporate structured sparsity and establishes circumstances under which the penalty function is a norm.
Figure 1: (a) The function $\Gamma(\beta, \lambda)$ as a function of $\beta$ for some values of $\lambda$; (b) the function $\Gamma(\beta, \lambda)$ as a function of $\lambda$ for some values of $\beta$.
Proposition 2.1. For every $\beta \in \mathbb{R}^n$, it holds that $\|\beta\|_1 \leq \Omega(\beta|\Lambda)$, and the equality holds if and only if $|\beta| := (|\beta_i| : i \in \mathbb{N}_n)$ belongs to the closure of $\Lambda$. Moreover, if $\Lambda$ is a nonempty convex cone then the function $\Omega(\cdot|\Lambda)$ is a norm and we have that $\Omega(\beta|\Lambda) \leq \omega \|\beta\|_1$, where $\omega := \max\{\Omega(e_k|\Lambda) : k \in \mathbb{N}_n\}$ and $\{e_k : k \in \mathbb{N}_n\}$ is the canonical basis of $\mathbb{R}^n$.
Proof. By the arithmetic-geometric mean inequality we have that $\|\beta\|_1 \leq \Gamma(\beta, \lambda)$, proving the first assertion. If $|\beta|$ belongs to the closure of $\Lambda$, there exists a sequence $\{\lambda^k : k \in \mathbb{N}\}$ in $\Lambda$ such that $\lim_{k \to \infty} \lambda^k = |\beta|$. Since $\Omega(\beta|\Lambda) \leq \Gamma(\beta, \lambda^k)$ it readily follows that $\Omega(\beta|\Lambda) \leq \|\beta\|_1$. Conversely, if $\Omega(\beta|\Lambda) = \|\beta\|_1$, then there is a sequence $\{\lambda^k : k \in \mathbb{N}\}$ in $\Lambda$ such that $\Gamma(\beta, \lambda^k) \leq \|\beta\|_1 + 1/k$. This inequality implies that some subsequence of this sequence converges to a $\lambda$ in the closure of $\Lambda$. Using the arithmetic-geometric inequality we conclude that $\lambda = |\beta|$ and the result follows. To prove the second part, observe that if $\Lambda$ is a nonempty convex cone, namely, for any $\lambda \in \Lambda$ and $t > 0$ it holds that $t\lambda \in \Lambda$, then $\Omega$ is positive homogeneous. Indeed, making the change of variable $\tilde{\lambda} = \lambda / |t|$ we see that $\Omega(t\beta|\Lambda) = |t| \Omega(\beta|\Lambda)$. Moreover, the inequality $\Omega(\beta|\Lambda) \geq \|\beta\|_1$ implies that if $\Omega(\beta|\Lambda) = 0$ then $\beta = 0$. The proof of the triangle inequality follows from the homogeneity and convexity of $\Omega$, namely $\Omega(\beta + \gamma|\Lambda) = 2\Omega((\beta + \gamma)/2\,|\,\Lambda) \leq \Omega(\beta|\Lambda) + \Omega(\gamma|\Lambda)$. Finally, the bound $\Omega(\beta|\Lambda) \leq \omega \|\beta\|_1$ holds with $\omega = \max\{\Omega(\beta|\Lambda) : \|\beta\|_1 = 1\}$, and since $\Omega$ is convex this maximum is achieved at an extreme point of the $\ell_1$ unit ball.
This proposition indicates that the function $\Omega(\cdot|\Lambda)$ penalizes less those vectors $\beta$ which have the property that $|\beta| \in \Lambda$, hence encouraging structured sparsity. Indeed, any permutation of the coordinates of a vector $\beta$ with the above property will incur the same or a larger value of the penalty term. Moreover, for certain choices of the set $\Lambda$, some of which we describe below, the penalty function will encourage vectors which not only are sparse but also have sparsity patterns $(1_{\{|\beta_i| > 0\}} : i \in \mathbb{N}_n) \in \Lambda$, where $1_{\{\cdot\}}$ denotes the indicator function.
We end this section by noting that a normalized version of the group Lasso penalty [16] is included in our setting as a special case. If $\{J_\ell : \ell \in \mathbb{N}_k\}$, $k \in \mathbb{N}_n$, form a partition of the index set $\mathbb{N}_n$, the corresponding group Lasso penalty is defined as $\Omega_{\mathrm{GL}}(\beta) = \sum_{\ell \in \mathbb{N}_k} \sqrt{|J_\ell|}\, \|\beta_{J_\ell}\|_2$, where, for every $J \subseteq \mathbb{N}_n$, we use the notation $\beta_J = (\beta_j : j \in J)$. It is an easy matter to verify that $\Omega_{\mathrm{GL}}(\beta) = \Omega(\beta|\Lambda)$ for $\Lambda = \{\lambda : \lambda \in \mathbb{R}^n_{++},\ \lambda_j = \theta_\ell,\ j \in J_\ell,\ \ell \in \mathbb{N}_k,\ \theta_\ell > 0\}$.
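This equivalence is easy to verify numerically: minimizing $\Gamma$ over the group-constant set gives $\theta_\ell = \|\beta_{J_\ell}\|_2 / \sqrt{|J_\ell|}$, at which $\Gamma$ equals the group Lasso penalty. A small sketch (ours) follows:

import numpy as np

def omega_group_lasso(beta, groups):
    return sum(np.sqrt(len(J)) * np.linalg.norm(beta[J]) for J in groups)

def gamma(beta, lam):
    return 0.5 * np.sum(beta ** 2 / lam + lam)

beta = np.array([0.5, -1.0, 2.0, 0.3, -0.7])
groups = [np.array([0, 1]), np.array([2, 3, 4])]
lam = np.empty_like(beta)
for J in groups:
    # Optimal common value per group: theta = ||beta_J||_2 / sqrt(|J|).
    lam[J] = np.linalg.norm(beta[J]) / np.sqrt(len(J))
print(np.isclose(gamma(beta, lam), omega_group_lasso(beta, groups)))  # True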
3 Examples of the penalty function
We proceed to discuss some examples of the set $\Lambda \subseteq \mathbb{R}^n_{++}$ which may be used in the design of the penalty function $\Omega(\cdot|\Lambda)$. All but the first example fall into the category that $\Lambda$ is a polyhedral cone, that is, $\Lambda = \{\lambda : \lambda \in \mathbb{R}^n_{++},\ A\lambda \geq 0\}$, where $A$ is an $m \times n$ matrix. Thus, in view of Proposition 2.1 the function $\Omega(\cdot|\Lambda)$ is a norm.
The first example corresponds to the prior knowledge that the magnitude of the components of the
regression vector should be in some prescribed intervals.
Example 3.1. We choose $a, b \in \mathbb{R}^n$, $0 < a \leq b$, and define the corresponding box as $B[a, b] := \prod_{i \in \mathbb{N}_n} [a_i, b_i]$.
The theorem below establishes the form of the box penalty; see also [8, 14] for related penalty functions. To state our result, we define, for every $t \in \mathbb{R}$, the function $(t)_+ = \max(0, t)$.
Theorem 3.1. We have that
$$\Omega(\beta|B[a, b]) = \|\beta\|_1 + \sum_{i \in \mathbb{N}_n} \left( \frac{1}{2a_i}(a_i - |\beta_i|)_+^2 + \frac{1}{2b_i}(|\beta_i| - b_i)_+^2 \right).$$
Moreover, the components of the vector $\lambda(\beta) := \mathrm{argmin}\{\Gamma(\beta, \lambda) : \lambda \in B[a, b]\}$ are given by the equations $\lambda_i(\beta) = |\beta_i| + (a_i - |\beta_i|)_+ - (|\beta_i| - b_i)_+$, $i \in \mathbb{N}_n$.
Proof. Since $\Omega(\beta|B[a, b]) = \sum_{i \in \mathbb{N}_n} \Omega(\beta_i | [a_i, b_i])$ it suffices to establish the result in the case $n = 1$. We shall show that if $a, b, \beta \in \mathbb{R}$, $a \leq b$, then
$$\Omega(\beta|[a, b]) = |\beta| + \frac{1}{2a}(a - |\beta|)_+^2 + \frac{1}{2b}(|\beta| - b)_+^2. \qquad (3.1)$$
Since both sides of the above equation are continuous functions of $\beta$ it suffices to prove this equation for $\beta \in \mathbb{R} \setminus \{0\}$. In this case, the function $\Gamma(\beta, \lambda)$ is strictly convex in the second argument, and so has a unique minimum in $\mathbb{R}_{++}$ at $\lambda = |\beta|$; see also Figure 1-b. Moreover, if $|\beta| \leq a$ the constrained minimum occurs at $\lambda = a$, whereas if $|\beta| \geq b$, it occurs at $\lambda = b$. This establishes the formula for $\lambda(\beta)$. Consequently, we have that
$$\Omega(\beta|[a, b]) = |\beta|\, 1_{\{a \leq |\beta| \leq b\}} + \frac{1}{2}\Big(\frac{\beta^2}{a} + a\Big) 1_{\{|\beta| < a\}} + \frac{1}{2}\Big(\frac{\beta^2}{b} + b\Big) 1_{\{|\beta| > b\}}.$$
Equation (3.1) now follows by a direct computation.
Note that the function in equation (3.1) is a concatenation of two quadratic functions, connected together with a linear function. Thus, the box penalty will favor sparsity only for a = 0, a case that is defined by a limiting argument.
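A short numerical sketch (ours) of Theorem 3.1; note that the minimizer simplifies to clipping $|\beta_i|$ to $[a_i, b_i]$:

import numpy as np

def omega_box(beta, a, b):
    # Box penalty of Theorem 3.1 (a sketch; assumes 0 < a <= b componentwise).
    below = np.maximum(a - np.abs(beta), 0.0)
    above = np.maximum(np.abs(beta) - b, 0.0)
    return np.sum(np.abs(beta) + below ** 2 / (2 * a) + above ** 2 / (2 * b))

def lambda_box(beta, a, b):
    # Minimizer lambda(beta): |beta_i| clipped to the interval [a_i, b_i].
    return np.clip(np.abs(beta), a, b)

beta = np.array([0.05, 0.5, 3.0])
a, b = np.full(3, 0.1), np.full(3, 1.0)
lam = lambda_box(beta, a, b)
gamma_val = 0.5 * np.sum(beta ** 2 / lam + lam)
print(np.isclose(omega_box(beta, a, b), gamma_val))   # True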
The second example implements the prior knowledge that the coordinates of the vector $\beta$ are ordered in a nonincreasing fashion.
Example 3.2. We define the wedge as $W = \{\lambda : \lambda \in \mathbb{R}^n_{++},\ \lambda_j \geq \lambda_{j+1},\ j \in \mathbb{N}_{n-1}\}$.
We say that a partition $\mathcal{J} = \{J_\ell : \ell \in \mathbb{N}_k\}$ of $\mathbb{N}_n$ is contiguous if for all $i \in J_\ell$, $j \in J_{\ell+1}$, $\ell \in \mathbb{N}_{k-1}$, it holds that $i < j$. For example, if $n = 3$, the partitions $\{\{1, 2\}, \{3\}\}$ and $\{\{1\}, \{2\}, \{3\}\}$ are contiguous but $\{\{1, 3\}, \{2\}\}$ is not.
Theorem 3.2. For every $\beta \in (\mathbb{R} \setminus \{0\})^n$ there is a unique contiguous partition $\mathcal{J} = \{J_\ell : \ell \in \mathbb{N}_k\}$ of $\mathbb{N}_n$, $k \in \mathbb{N}_n$, such that
$$\Omega(\beta|W) = \sum_{\ell \in \mathbb{N}_k} \sqrt{|J_\ell|}\, \|\beta_{J_\ell}\|_2. \qquad (3.2)$$
Moreover, the components of the vector $\lambda(\beta) = \mathrm{argmin}\{\Gamma(\beta, \lambda) : \lambda \in W\}$ are given by
$$\lambda_j(\beta) = \frac{\|\beta_{J_\ell}\|_2}{\sqrt{|J_\ell|}}, \quad j \in J_\ell,\ \ell \in \mathbb{N}_k \qquad (3.3)$$
and, for every $\ell \in \mathbb{N}_k$ and subset $K \subset J_\ell$ formed by the first $k' < |J_\ell|$ elements of $J_\ell$, it holds that
$$\frac{\|\beta_K\|_2}{\sqrt{k'}} > \frac{\|\beta_{J_\ell \setminus K}\|_2}{\sqrt{|J_\ell| - k'}}. \qquad (3.4)$$
The partition J appearing in the theorem is determined by the set of inequalities λ_j ≥ λ_{j+1} which are an equality at the minimum. This set is identified by examining the Karush-Kuhn-Tucker optimality conditions [3] of the optimization problem (2.2) for Λ = W. The detailed proof is reported in the supplementary material. Equations (3.3) and (3.4) indicate a strategy to compute the partition associated with a vector β. We explain how to do this in Section 4.
An interesting property of the wedge penalty is that it has the form of a group Lasso penalty (see the discussion at the end of Section 2) with groups not fixed a-priori but depending on the location of the vector β. The groups are the elements of the partition J and are identified by certain convex constraints on the vector β. For example, for n = 2 we obtain that Ω(β|W) = ‖β‖₁ if |β₁| > |β₂| and Ω(β|W) = √2 ‖β‖₂ otherwise. For n = 3, we have that

    Ω(β|W) =
      ‖β‖₁,                      if |β₁| > |β₂| > |β₃|,                 J = {{1}, {2}, {3}}
      √(2(β₁² + β₂²)) + |β₃|,    if |β₁| ≤ |β₂| and β₁² + β₂² > 2β₃²,   J = {{1, 2}, {3}}
      |β₁| + √(2(β₂² + β₃²)),    if |β₂| ≤ |β₃| and 2β₁² > β₂² + β₃²,   J = {{1}, {2, 3}}
      √(3(β₁² + β₂² + β₃²)),     otherwise,                             J = {{1, 2, 3}}

where we have also reported the partition involved in each case.
The next example is an extension of the wedge set which is inspired by previous work on the group Lasso estimator with hierarchically overlapping groups [17]. It models vectors whose magnitude is ordered according to a graphical structure. Within this context, the wedge corresponds to the set associated with a line graph.
Example 3.3. We let A be the incidence matrix of a directed graph and choose Λ = {λ : λ ∈ R^n_{++}, Aλ ≥ 0}.
We have confirmed that Theorem 3.2 extends to the case that the graph is a tree but the general case is yet to be understood. We postpone this discussion to a future occasion.
Next, we note that the wedge may equivalently be expressed as the constraint that the difference vector D¹(λ) := (λ_{j+1} − λ_j : j ∈ N_{n−1}) is less than or equal to zero. Our next example extends this observation by using the higher order difference operator, which is given by the formula D^k(λ) = (λ_{j+k} + Σ_{ℓ∈N_k} (−1)^ℓ C(k,ℓ) λ_{j+k−ℓ} : j ∈ N_{n−k}), where C(k,ℓ) denotes the binomial coefficient.
Example 3.4. For every k ∈ N_n we define the set W^k := {λ : λ ∈ R^n_{++}, D^k(λ) ≤ 0}.
The corresponding penalty Ω(·|W^k) encourages vectors whose sparsity pattern is concentrated on at most k different contiguous regions. The case k = 1 essentially corresponds to the wedge, while the case k = 2 includes vectors which have a convex "profile" and whose sparsity pattern is concentrated either on the first elements of the vector, on the last, or on both.
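In code, D^k is just the k-th order forward difference, so membership in W^k can be tested in one line; a minimal sketch (names are ours) follows:

    import numpy as np

    def Dk(lam, k):
        # k-th order forward difference operator of Example 3.4
        return np.diff(np.asarray(lam, dtype=float), n=k)

    def in_wedge_k(lam, k):
        # W^k = {lam in R^n_{++} : D^k(lam) <= 0}; k = 1 recovers the wedge W
        lam = np.asarray(lam, dtype=float)
        return bool(np.all(lam > 0) and np.all(Dk(lam, k) <= 0))

    print(in_wedge_k([5, 4, 2, 1], k=1))  # True: non-increasing sequence
    print(Dk([5, 4, 2, 1], k=2))          # second-order differences: [-1.  1.]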
We end this section by discussing a useful construction which may be applied to generate new penalty functions from available ones. It is obtained by composing a set Θ ⊆ R^k_{++} with a linear transformation, modeling the sum of the components of a vector across the elements of a prescribed partition {P_ℓ : ℓ ∈ N_k} of N_n. That is, we let Λ = {λ : λ ∈ R^n_{++}, (Σ_{j∈P_ℓ} λ_j : ℓ ∈ N_k) ∈ Θ}. We use this construction in the composite wedge experiments in Section 5.
4 Optimization method
In this section, we address the issue of implementing the learning method (2.1) numerically. Since the penalty function Ω(·|Λ) is constructed as the infimum of a family of quadratic regularizers, the optimization problem (2.1) reduces to a simultaneous minimization over the vectors β and λ. For a fixed λ ∈ Λ, the minimum over β ∈ R^n is a standard Tikhonov regularization and can be solved directly in terms of a matrix inversion. For a fixed β, the minimization over λ ∈ Λ requires computing the penalty function (2.2). These observations naturally suggest an alternating minimization algorithm, which has already been considered in special cases in [1]. To describe our algorithm we choose ε > 0 and introduce the mapping φ^ε : R^n → R^n_{++}, whose i-th coordinate at β ∈ R^n is given by φ_i^ε(β) = √(β_i² + ε). For β ∈ (R\{0})^n, we also let λ(β) = argmin{Γ(β,λ) : λ ∈ Λ}. The alternating minimization algorithm is defined as follows: choose λ⁰ ∈ Λ and, for k ∈ N, define the iterates
    β^k = (diag(λ^{k−1}) XᵀX + ρI)⁻¹ diag(λ^{k−1}) Xᵀ y    (4.1)
    λ^k = λ(φ^ε(β^k)).    (4.2)
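A minimal sketch of these iterations follows (all names are ours; we write (4.1) as the linear system it solves, and `lam_of_beta` stands for any routine computing the argmin in (4.2), for example the wedge algorithm of Figure 2):

    import numpy as np

    def alternating_minimization(X, y, rho, lam0, lam_of_beta, eps=1e-8, iters=100):
        lam = np.asarray(lam0, dtype=float)
        n = X.shape[1]
        for _ in range(iters):
            D = np.diag(lam)
            # (4.1): Tikhonov step, solving (D X'X + rho I) beta = D X' y
            beta = np.linalg.solve(D @ X.T @ X + rho * np.eye(n), D @ X.T @ y)
            # (4.2): smooth beta away from zero, then minimize Gamma over Lambda
            lam = lam_of_beta(np.sqrt(beta ** 2 + eps))
        return beta, lam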
The following theorem establishes convergence of this algorithm. Its proof is presented in the supplementary material.
Theorem 4.1. If the set Λ is convex and, for all a, b ∈ R with 0 < a < b, the set Λ_{a,b} := [a,b]^n ∩ Λ is a nonempty, compact subset of the interior of Λ, then the iterations (4.1)–(4.2) converge to the vector β(ε) := argmin{‖y − Xβ‖₂² + 2ρ Ω(φ^ε(β)|Λ) : β ∈ R^n}. Moreover, any convergent subsequence of the sequence {β(1/ℓ) : ℓ ∈ N} converges to a solution of the optimization problem (2.1).

    Initialization: k ← 0
    Input: β ∈ R^n; Output: J_1, ..., J_k
    for t = 1 to n do
        J_{k+1} ← {t}; k ← k + 1
        while k > 1 and ‖β_{J_{k−1}}‖₂²/|J_{k−1}| ≤ ‖β_{J_k}‖₂²/|J_k| do
            J_{k−1} ← J_{k−1} ∪ J_k; k ← k − 1
        end
    end
Figure 2: Iterative algorithm to compute the wedge penalty
The most challenging step in the alternating algorithm is the computation of the vector λ^k. Fortunately, if Λ is a second order cone, problem (2.2) defining the penalty function Ω(·|Λ) may be reformulated as a second order cone program (SOCP), see e.g. [5]. To see this, we introduce an additional variable t ∈ R^n and note that

    Ω(β|Λ) = min{ (1/2) Σ_{i∈N_n} (t_i + λ_i) : ‖(2β_i, t_i − λ_i)‖₂ ≤ t_i + λ_i, t_i ≥ 0, i ∈ N_n, λ ∈ Λ }.

In particular, in all examples in Section 3, the set Λ is formed by linear constraints and, so, problem (2.2) is an SOCP. We may then use available tool-boxes to compute the solution of this problem.
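For instance, the wedge case can be written with CVXPY along the following lines (a sketch under our notation; the constraint ‖(2β_i, t_i − λ_i)‖₂ ≤ t_i + λ_i encodes β_i² ≤ t_i λ_i, and the 1/2 scaling matches the regularizer Γ of (2.2)):

    import cvxpy as cp

    def wedge_penalty_socp(beta):
        n = len(beta)
        lam = cp.Variable(n, nonneg=True)
        t = cp.Variable(n, nonneg=True)
        cons = [lam[:-1] >= lam[1:]]  # lam in (the closure of) the wedge W
        for i in range(n):
            cons.append(cp.SOC(t[i] + lam[i], cp.hstack([2 * beta[i], t[i] - lam[i]])))
        prob = cp.Problem(cp.Minimize(0.5 * cp.sum(t + lam)), cons)
        return prob.solve()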
However, in special cases the computation of the penalty function may be significantly facilitated by using the analytical formulas derived in Section 3. Here, for simplicity we describe how to do this in the case of the wedge penalty. For this purpose we say that a vector δ ∈ R^n is admissible if, for every k ∈ N_n, it holds that ‖δ_{N_k}‖₂/√k ≤ ‖δ‖₂/√n.
The proof of the next lemma is straightforward and we do not elaborate on the details.
Lemma 4.1. If δ ∈ R^n and ν ∈ R^p are admissible and ‖δ‖₂/√n ≤ ‖ν‖₂/√p then (δ, ν) is admissible.
The iterative algorithm presented in Figure 2 can be used to find the partition J = {J_ℓ : ℓ ∈ N_k} and, so, the vector λ(β) described in Theorem 3.2. The algorithm processes the components of the vector β in a sequential manner. Initially, the first component forms the only set in the partition. After the generic iteration t − 1, where the partition is composed of k sets, the index of the next component, t, is put in a new set J_{k+1}. Two cases can occur: either the means of the squares of the sets are in strict descending order, or this order is violated by the last set. The latter is the only case that requires further action, so the algorithm merges the last two sets and repeats until the sets in the partition are fully ordered. Note that, since the only operation performed by the algorithm is the merge of admissible sets, Lemma 4.1 ensures that after each step t the current partition satisfies the conditions (3.4). Moreover, the while loop ensures that after each step the current partition satisfies, for every ℓ ∈ N_{k−1}, the constraints ‖β_{J_ℓ}‖₂/√|J_ℓ| > ‖β_{J_{ℓ+1}}‖₂/√|J_{ℓ+1}|. Thus, the output of the algorithm is the partition J defined in Theorem 3.2. In the actual implementation of the algorithm, the means of squares of each set can be saved. This allows us to compute the mean of squares of a merged set as a weighted mean, which is a constant time operation. Since there are n − 1 consecutive terms in total, this is also the maximum number of merges that the algorithm can perform. Each merge requires exactly one additional test, so we can conclude that the running time of the algorithm is linear.
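The following is a compact sketch of this linear-time routine (our naming; each stack entry stores the sum of squares and the size of one group, which is all the merge test needs):

    import numpy as np

    def wedge_penalty(beta):
        groups = []  # contiguous groups J_1, ..., J_k stored as (sum_of_squares, size)
        for b in np.asarray(beta, dtype=float):
            groups.append((b * b, 1))
            # merge while the means of squares fail to be strictly decreasing
            while len(groups) > 1 and \
                    groups[-2][0] / groups[-2][1] <= groups[-1][0] / groups[-1][1]:
                s2, c2 = groups.pop()
                s1, c1 = groups.pop()
                groups.append((s1 + s2, c1 + c2))
        omega = sum(np.sqrt(c * s) for s, c in groups)  # sum of sqrt(|J|) * ||beta_J||_2
        lam = np.concatenate([np.full(c, np.sqrt(s / c)) for s, c in groups])
        return omega, lam

Its output can be checked against the SOCP formulation sketched above on random vectors.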
5 Numerical simulations
In this section we present some numerical simulations with the proposed method. For simplicity, we consider data generated noiselessly from y = Xβ*, where β* ∈ R^100 is the true underlying regression vector, and X is an m × 100 input matrix, m being the sample size. The elements of X are generated i.i.d. from the standard normal distribution, and the columns of X are then normalized such that their ℓ₂ norm is 1. Since we consider the noiseless case, we solve the interpolation problem min{Ω(β) : y = Xβ}, for different choices of the penalty function Ω. In practice, we solve problem (2.1) for a tiny value of the parameter ρ = 10⁻⁸, which we found to be sufficient to ensure that the error term in (2.1) is negligible at the minimum.
[Figure 3 appears here: six panels (a)-(f) plotting model error against sample size (12 to 100) for the compared methods; the panel legends list Lasso, Box-A/B/C, Wedge, GL-lin, C-Wedge, GL-ind, GL-hie, GL-con, W-2 and W-3.]
Figure 3: Comparison between different penalty methods: (a) Box vs. Lasso; (b,c) Wedge vs. Hierarchical group Lasso; (d) Composite wedge; (e) Convex; (f) Cubic. See text for more information
All experiments were repeated 50 times, generating each time a new matrix X. In the figures we report the average of the model error E[‖β̂ − β*‖₂²] of the vector β̂ learned by each method, as a function of the sample size m. In the following, we discuss a series of experiments, corresponding to different choices for the model vector β* and its sparsity pattern. In all experiments, we solved the optimization problem (2.1) with the algorithm presented in Section 4. Whenever possible we solved step (4.2) using the formulas derived in Section 3 and resorted to the solver CVX (http://cvxr.com/cvx/) in the other cases.
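The data-generation step is straightforward to reproduce; the following is a sketch under the stated setup (function names are ours):

    import numpy as np

    def make_problem(beta_star, m, seed=0):
        # X has i.i.d. standard normal entries, columns rescaled to unit l2 norm,
        # and y = X beta* (noiseless interpolation setting)
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((m, len(beta_star)))
        X /= np.linalg.norm(X, axis=0)
        return X, X @ beta_star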
Box. In the first experiment the model is 10-sparse, where each nonzero component, in a random position, is an integer uniformly sampled in the interval [−10, 10]. We wish to show that the more accurate the prior information about the model is, the more precise the estimate will be. We use a box penalty (see Theorem 3.1) constructed "around" the model, imagining that an oracle tells us that each component |β_i*| is bounded within an interval. We consider three boxes B[a,b] of different sizes, namely a_i = (|β_i*| − r)₊ and b_i = |β_i*| + r, with radii r = 5, 1 and 0.1, which we denote as Box-A, Box-B and Box-C, respectively. We compare these methods with the Lasso; see Figure 3-a. As expected, the three box penalties perform better. Moreover, as the radius of a box diminishes, the amount of information about the true model increases, and the performance improves.
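A sketch of the oracle boxes used above (under our reading of the construction, with the lower bound clipped at zero as in the (·)₊ notation):

    import numpy as np

    def oracle_box(beta_star, r):
        # Box-A, Box-B, Box-C correspond to r = 5, 1, 0.1
        mag = np.abs(beta_star)
        return np.maximum(mag - r, 0.0), mag + r  # lower bounds a, upper bounds b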
Wedge. In the second experiment, we consider a regression vector whose components are nonincreasing in absolute value and only a few are nonzero. Specifically, we choose a 10-sparse vector: β_j* = 11 − j, if j ∈ N_10 and zero otherwise. We compare the Lasso, which makes no use of such ordering information, with the wedge penalty Ω(β|W) (see Example 3.2 and Theorem 3.2) and the hierarchical group Lasso in [17], which both make use of such information. For the group Lasso we choose Ω(β) = Σ_{ℓ∈N_100} ‖β_{J_ℓ}‖₂, with J_ℓ = {ℓ, ℓ+1, ..., 100}, ℓ ∈ N_100. These two methods are referred to as "Wedge" and "GL-lin" in Figure 3-b, respectively. As expected both methods improve over the Lasso, with "GL-lin" being the best of the two. We further tested the robustness of the methods, by adding two additional nonzero components with value of 10 to the vector β* in a random position between 20 and 100. This result, reported in Figure 3-c, indicates that "GL-lin" is more sensitive to such a perturbation.
Composite wedge. Next we consider a more complex experiment, where the regression vector is sparse within different contiguous regions P_1, ..., P_10, and the ℓ₁ norm on one region is larger than the ℓ₁ norm on the next region. We choose sets P_i = {10(i−1)+1, ..., 10i}, i ∈ N_10, and generate a 6-sparse vector β* whose i-th nonzero element has value 31 − i (decreasing) and is in a random position in P_i, for i ∈ N_6. We encode this prior knowledge by choosing Ω(·|Λ) with Λ = {λ ∈ R^100_{++} : ‖λ_{P_i}‖₁ ≥ ‖λ_{P_{i+1}}‖₁, i ∈ N_9}. This method constrains the sums over the sets to be nonincreasing and may be interpreted as the composition of the wedge set with an average operation across the sets P_i; see the discussion at the end of Section 3. This method, which is referred to as "C-Wedge" in Figure 3-d, is compared to the Lasso and to three other versions of the group Lasso.
Figure 4: Lasso vs. penalty Ω(·|W²) (left) and Ω(·|W³) (right); see text for more information.
The first is a standard group Lasso with the nonoverlapping groups J_i = P_i, i ∈ N_10, thus encouraging the presence of sets of zero elements, which is useful because there are 4 such sets. The second is a variation of the hierarchical group Lasso discussed above with J_i = ∪_{j=i}^{10} P_j, i ∈ N_10. A problem with these approaches is that the ℓ₂ norm is applied at the level of the individual sets P_i, which does not promote sparsity within these sets. To counter this effect we can enforce contiguous nonzero patterns within each of the P_i, as proposed by [10]. That is, we consider as the groups the sets formed by all sequences of q ∈ N_9 consecutive elements at the beginning or at the end of each of the sets P_i, for a total of 180 groups. These three groupings will be referred to as "GL-ind", "GL-hie", "GL-con" in Figure 3-d, respectively. This result indicates the advantage of "C-Wedge" over the other methods considered. In particular, the group Lasso methods fall behind our method and the Lasso, with "GL-con" being slightly better than "GL-ind" and "GL-hie". Notice also that all group Lasso methods gradually diminish the model error until they have a point for each dimension, while our method and the Lasso have a steeper descent, reaching zero at a number of points which is less than half the number of dimensions.
Convex and Cubic. To show the flexibility of our framework, we consider two further examples of sparse regression vectors with additional structured properties. In the first example, most of the components of the vector are zero, but the first and the last few elements follow a discrete convex trend. Specifically, we choose β* = (5², 4², 3², 2², 1, 0, ..., 0, 1, 2², 3², 4², 5²) ∈ R^100. In this case, we expect the penalty function Ω(·|W²) to outperform the Lasso, because it favors vectors with convex shape. Results are shown in Figure 3-e, where this penalty is named "W-2". In lack of other specific methods to impose this convex shape constraint, and motivated by the fact that the first few components decrease, we compare it with two methods that favor a learned vector that is decreasing: the Wedge and the group Lasso with J_k = {k, ..., 100} for k ∈ N_100. These methods and the Lasso fail to use the prior knowledge of convexity, and are outperformed by using the constraint set W². The second example considers the case where |β*| ∈ W³, namely the differences of the second order are decreasing. This vector is constructed from the cubic polynomial p(t) = −t(t − 1.5)(t + 6.5). The polynomial is evaluated at 100 equally spaced (0.1) points, starting from −7. The resulting vector starts with 5 nonzero components and then has a bump of another 15 elements. We use our method with the penalty Ω(·|W³), which is referred to as "W-3" in the Figure. The model error, compared again with "W-1" and group Lasso linear, is shown in Figure 3-f. Finally, Figure 4 displays the regression vector found by the Lasso and the vector learned by "W-2" (left) and by the Lasso and "W-3" (right), in a single run with sample size of 15 and 35, respectively. The estimated vectors (green) are superposed to the true vector (black). Our method provides a better estimate than the Lasso in both cases.
Conclusion
We proposed a family of penalty functions that can be used to model structured sparsity in linear regression. We provided theoretical, algorithmic and computational information about this new class of penalty functions. Our theoretical observations highlight the generality of this framework to model structured sparsity. An important feature of our approach is that it can deal with richer model structures than current approaches while maintaining convexity of the penalty function. Our practical experience indicates that these penalties perform well numerically, improving over state-of-the-art penalty methods for structured sparsity, suggesting that our framework is promising for applications. In the future, it would be valuable to extend the ideas presented here to learning nonlinear sparse regression models. There is also a need to clarify the rate of convergence of the algorithm presented here.
References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[2] R.G. Baraniuk, V. Cevher, M.F. Duarte, and C. Hegde. Model-based compressive sensing. IEEE Transactions on Information Theory, 56(4):1982–2001, 2010.
[3] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[4] P.J. Bickel, Y. Ritov, and A.B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37:1705–1732, 2009.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] J.M. Danskin. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, 14(4):641–664, 1966.
[7] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 417–424. ACM, 2009.
[8] L. Jacob. Structured priors for supervised learning in computational biology. Ph.D. Thesis, 2009.
[9] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In International Conference on Machine Learning (ICML 26), 2009.
[10] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. arXiv:0904.3523v2, 2009.
[11] S. Kim and E.P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. Technical report, arXiv:0909.1373, 2009.
[12] K. Lounici. Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. Electronic Journal of Statistics, 2:90–102, 2008.
[13] K. Lounici, M. Pontil, A.B. Tsybakov, and S. van de Geer. Taking advantage of sparsity in multi-task learning. In Proc. of the 22nd Annual Conference on Learning Theory (COLT), 2009.
[14] A.B. Owen. A robust hybrid of lasso and ridge regression. In Prediction and Discovery: AMS-IMS-SIAM Joint Summer Research Conference, Machine and Statistical Learning: Prediction and Discovery, volume 443, page 59, 2007.
[15] S.A. van de Geer. High-dimensional generalized linear models and the Lasso. Annals of Statistics, 36(2):614, 2008.
[16] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 68(1):49–67, 2006.
[17] P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Annals of Statistics, 37(6A):3468–3497, 2009.
A Appendix
In this appendix we provide the proof of Theorems 3.2 and 4.1.
A.1 Proof of Theorem 3.2
Before proving the theorem we require some additional notation. Given any two disjoint subsets J, K ⊆ N_n we define the region

    Q_{J,K} = { δ : δ ∈ R^n, ‖δ_J‖₂²/|J| > ‖δ_K‖₂²/|K| }.

Note that the boundary of this region is determined by the zero set of a homogeneous polynomial of degree two. We also need the following construction.
Definition A.1. For every subset S ⊆ N_{n−1} we set k = |S| + 1 and label the elements of S in increasing order as S = {j_ℓ : ℓ ∈ N_{k−1}}. We associate with the subset S a contiguous partition of N_n, given by J(S) = {J_ℓ : ℓ ∈ N_k}, where we define J_ℓ := {j_{ℓ−1} + 1, ..., j_ℓ} and set j_0 = 0 and j_k = n.
A subset S of N_{n−1} also induces two regions in R^n which play a central role in the identification of the wedge penalty. First, we describe the region which "crosses over" the induced partition J(S). This is defined to be the set

    O_S := ∩ { Q_{J_ℓ, J_{ℓ+1}} : ℓ ∈ N_{k−1} }.    (A.1)

In other words, δ ∈ O_S if the average of the square of its components within each region J_ℓ strictly decreases with ℓ. The next region which is essential in our analysis is the "stays within" region, induced by the partition J(S). This region requires the notation J_{ℓ,q} := {j : j ∈ J_ℓ, j ≤ q} and is defined by the equation

    I_S := ∩ { Q̄_{J_ℓ, J_{ℓ,q}} : q ∈ J_ℓ, ℓ ∈ N_k },    (A.2)

where Q̄ denotes the closure of the set Q. In other words, all vectors δ within this region have the property that, for every set J_ℓ ∈ J(S), the average of the square of a first segment of components of δ within this set is not greater than the average over J_ℓ. We note that if S is the empty set the above notation should be interpreted as O_S = R^n and

    I_S = ∩ { Q̄_{N_n, N_q} : q ∈ N_n }.

We also introduce, for every S ⊆ N_{n−1}, the sets

    U_S := O_S ∩ I_S ∩ (R\{0})^n.
We shall prove the following slightly more general version of Theorem 3.2.
Theorem A.1. The collection of sets U := {U_S : S ⊆ N_{n−1}} forms a partition of (R\{0})^n. For each β ∈ (R\{0})^n there is a unique S ⊆ N_{n−1} such that β ∈ U_S, and

    Ω(β|W) = Σ_{ℓ∈N_k} √|J_ℓ| ‖β_{J_ℓ}‖₂,    (A.3)

where k = |S| + 1. Moreover, the components of the vector λ(β) := argmin{Γ(β,λ) : λ ∈ W} are given by the equations λ_j(β) = θ_ℓ, j ∈ J_ℓ, ℓ ∈ N_k, where

    θ_ℓ = ‖β_{J_ℓ}‖₂ / √|J_ℓ|.    (A.4)
Proof. First, let us observe that there are n − 1 inequality constraints defining W. It readily follows that all vectors in this constraint set are regular, in the sense of optimization theory, see [3, p. 279]. Hence, we can appeal to [3, Prop. 3.3.4, p. 316 and Prop. 3.3.6, p. 322], which state that λ ∈ R^n_{++} is a solution to the minimum problem determined by the wedge penalty, if and only if there exists a vector α = (α_i : i ∈ N_{n−1}) with nonnegative components such that

    −β_j²/λ_j² + 1 + α_{j−1} − α_j = 0,  j ∈ N_n,    (A.5)

where we set α_0 = α_n = 0. Furthermore, the following complementary slackness conditions hold true

    α_j (λ_{j+1} − λ_j) = 0,  j ∈ N_{n−1}.    (A.6)
To unravel these equations, we let S := {j : λ_j > λ_{j+1}, j ∈ N_{n−1}}, which is the subset of indexes corresponding to the constraints that are not tight. When k ≥ 2, we express this set in the form {j_ℓ : ℓ ∈ N_{k−1}} where k = |S| + 1.
As explained in Definition A.1, the set S induces the partition J(S) = {J_ℓ : ℓ ∈ N_k} of N_n. When k = 1 our notation should be interpreted to mean that S is empty and the partition J(S) consists only of N_n. In this case, it is easy to solve the equations (A.5) and (A.6). In fact, all components of the vector λ have a common value, say θ > 0, and by summing both sides of equation (A.5) over j ∈ N_n we obtain that θ² = ‖β‖₂²/n. Moreover, summing both sides of the same equation over j ∈ N_q we obtain that α_q = −Σ_{j∈N_q} β_j²/θ² + q and, since α_q ≥ 0, we conclude that β ∈ I_S = U_S.
We now consider the case that k ≥ 2. Hence, the vector λ has equal components on each subset J_ℓ, which we denote by θ_ℓ, ℓ ∈ N_k. The definition of the set S implies that the θ_ℓ are strictly decreasing and equation (A.6) implies that α_j = 0, for every j ∈ S. Summing both sides of equation (A.5) over j ∈ J_ℓ we obtain that

    −(1/θ_ℓ²) Σ_{j∈J_ℓ} β_j² + |J_ℓ| = 0,

from which equation (A.4) follows. Since the θ_ℓ are strictly decreasing, we conclude that β ∈ O_S. Moreover, choosing q ∈ J_ℓ and summing both sides of equations (A.5) over j ∈ J_{ℓ,q} we obtain that

    0 ≤ α_q = −‖β_{J_{ℓ,q}}‖₂²/θ_ℓ² + |J_{ℓ,q}|,

which implies that β ∈ Q̄_{J_ℓ, J_{ℓ,q}}. Since this holds for every q ∈ J_ℓ and ℓ ∈ N_k we conclude that β ∈ I_S and therefore, it follows that β ∈ U_S.
In summary, we have shown that β ∈ U_S. In particular, this implies that the collection of sets U covers (R\{0})^n. Next, we show that the elements of U are disjoint. To this end, we observe that the computation described above can be reversed. That is to say, conversely, for any S ⊆ N_{n−1} and β ∈ U_S we conclude that the vectors λ and α defined above solve the equations (A.5) and (A.6). Since the wedge penalty function is strictly convex we know that equations (A.5) and (A.6) have a unique solution. Now, if β ∈ U_S ∩ U_{S′} then it must follow that λ = λ′. Consequently, since the vectors λ and λ′ are constant on any element of their respective partitions J(S) and J(S′), strictly decreasing from one element to the next in those partitions, it must be the case that S = S′.
We note that if some components of β are zero we may compute Ω(β|Λ) as a limiting process, since the function Ω(·|Λ) is continuous.
Proof of Theorem 4.1. We divide the proof into several steps. To this end, we define

    E^ε(β, λ) := ‖y − Xβ‖₂² + 2ρ Γ(φ^ε(β), λ)

and let β(ε) := argmin{E^ε(β, λ(φ^ε(β))) : β ∈ R^n}.
Step 1. We define two sequences, θ_k = E^ε(β^k, λ^{k−1}) and ν_k = E^ε(β^k, λ^k), and observe, for any k ≥ 2, that

    θ_{k+1} ≤ ν_k ≤ θ_k ≤ ν_{k−1}.    (A.7)

These inequalities follow directly from the definition of the alternating algorithm, see equations (4.1) and (4.2).
Step 2. We define the compact set B = {β : β ∈ R^n, ‖β‖₁ ≤ θ_1}. From the first inequality in Proposition 2.1, ‖β‖₁ ≤ Ω(β|Λ), and inequality (A.7) we conclude, for every k ∈ N, that β^k ∈ B.
Step 3. We define a function g : R^n → R at β ∈ R^n as

    g(β) = min{E^ε(β, λ) : λ ∈ Λ} = E^ε(β, λ(φ^ε(β))).

We claim that g is continuous on B. In fact, there exists a constant κ > 0 such that, for every β¹, β² ∈ B, it holds that

    |g(β¹) − g(β²)| ≤ κ ‖λ(φ^ε(β¹)) − λ(φ^ε(β²))‖_∞.    (A.8)

The essential ingredient in the proof of this inequality is the fact that by our hypothesis on the set Λ there exist constants a and b such that, for all β ∈ B, λ(φ^ε(β)) ∈ [a, b]^n. This fact follows by Danskin's Theorem [6].
Step 4. By step 2, there exists a subsequence {β^{k_ℓ} : ℓ ∈ N} which converges to β̄ ∈ B and, for all β ∈ R^n and λ ∈ Λ, it holds that

    E^ε(β̄, λ(φ^ε(β̄))) ≤ E^ε(β, λ(φ^ε(β̄))),
    E^ε(β̄, λ(φ^ε(β̄))) ≤ E^ε(β̄, λ).    (A.9)

Indeed, from step 1 we conclude that there exists θ ∈ R₊₊ such that

    lim_{k→∞} θ_k = lim_{k→∞} ν_k = θ.

Under our hypothesis the mapping β ↦ λ(β) is continuous for β ∈ (R\{0})^n; we conclude that

    lim_{ℓ→∞} λ^{k_ℓ} = λ(φ^ε(β̄)).

By the definition of the alternating algorithm, we have, for all β ∈ R^n and λ ∈ Λ, that

    θ_{k+1} = E^ε(β^{k+1}, λ^k) ≤ E^ε(β, λ^k),   ν_k = E^ε(β^k, λ^k) ≤ E^ε(β^k, λ).

From these inequalities we obtain, passing to the limit, inequalities (A.9).
Step 5. The vector (β̄, λ(φ^ε(β̄))) is a stationary point. Indeed, since β̄ is admissible, by step 3, λ(φ^ε(β̄)) ∈ int(Λ). Therefore, since E^ε is continuously differentiable this claim follows from step 4.
Step 6. The alternating algorithm converges. This claim follows from the fact that E^ε is strictly convex. Hence, E^ε has a unique global minimum in R^n × Λ, which in virtue of inequalities (A.9) is attained at (β̄, λ(φ^ε(β̄))).
The last claim in the theorem follows from the fact that the set {β(ε) : ε > 0} is bounded and the function β(·) is continuous.
Generating more realistic images using gated MRF's
Marc'Aurelio Ranzato
Volodymyr Mnih
Geoffrey E. Hinton
Department of Computer Science
University of Toronto
{ranzato,vmnih,hinton}@cs.toronto.edu
Abstract
Probabilistic models of natural images are usually evaluated by measuring performance on rather indirect tasks, such as denoising and inpainting. A more direct way to evaluate a generative model is to draw samples from it and to check
whether statistical properties of the samples match the statistics of natural images.
This method is seldom used with high-resolution images, because current models
produce samples that are very different from natural images, as assessed by even
simple visual inspection. We investigate the reasons for this failure and we show
that by augmenting existing models so that there are two sets of latent variables,
one set modelling pixel intensities and the other set modelling image-specific pixel
covariances, we are able to generate high-resolution images that look much more
realistic than before. The overall model can be interpreted as a gated MRF where
both pair-wise dependencies and mean intensities of pixels are modulated by the
states of latent variables. Finally, we confirm that if we disallow weight-sharing
between receptive fields that overlap each other, the gated MRF learns more efficient internal representations, as demonstrated in several recognition tasks.
1 Introduction and Prior Work
The study of the statistical properties of natural images has a long history and has influenced many
fields, from image processing to computational neuroscience [1]. In this work we focus on probabilistic models of natural images. These models are useful for extracting representations [2, 3, 4]
that can be used for discriminative tasks and they can also provide adaptive priors [5, 6, 7] that can be
used in applications like denoising and inpainting. Our main focus, however, will be on improving
the quality of the generative model, rather than exploring its possible applications.
Markov Random Fields (MRF's) provide a very general framework for modelling natural images.
In an MRF, an image is assigned a probability which is a normalized product of potential functions,
with each function typically being defined over a subset of the observed variables. In this work we
consider a very versatile class of MRF's in which potential functions are defined over both pixels
and latent variables, thus allowing the states of the latent variables to modulate or gate the effective
interactions between the pixels. This type of MRF, that we dub gated MRF, was proposed as an
image model by Geman and Geman [8]. Welling et al. [9] showed how an MRF in this family1
could be learned for small image patches and their work was extended to high-resolution images by
Roth and Black [6] who also demonstrated its success in some practical applications [7].
Besides their practical use, these models were specifically designed to match the statistical properties
of natural images, and therefore, it seems natural to evaluate them in those terms. Indeed, several
authors [10, 7] have proposed that these models should be evaluated by generating images and
1
Product of Student's t models (without pooling) may not appear to have latent variables but each potential
can be viewed as an infinite mixture of zero-mean Gaussians where the inverse variance of the Gaussian is the
latent variable.
checking whether the samples match the statistical properties observed in natural images. It is,
therefore, very troublesome that none of the existing models can generate good samples, especially
for high-resolution images (see for instance fig. 2 in [7] which is one of the best models of highresolution images reported in the literature so far). In fact, as our experiments demonstrate the
generated samples from these models are more similar to random images than to natural images!
When MRF's with gated interactions are applied to small image patches, they actually seem to
work moderately well, as demonstrated by several authors [11, 12, 13]. The generated patches have
some coherent and elongated structure and, like natural image patches, they are predominantly very
smooth with sudden outbreaks of strong structure. This is unsurprising because these models have
a built-in assumption that images are very smooth with occasional strong violations of smoothness [8, 14, 15]. However, the extension of these patch-based models to high-resolution images by
replicating filters across the image has proven to be difficult. The receptive fields that are learned
no longer resemble Gabor wavelets but look random [6, 16] and the generated images lack any of
the long range structure that is so typical of natural images [7]. The success of these methods in
applications such as denoising is a poor measure of the quality of the generative model that has been
learned: Setting the parameters to random values works almost as well for eliminating independent
Gaussian noise [17], because this can be done quite well by just using a penalty for high-frequency
variation.
In this work, we show that the generative quality of these models can be drastically improved by
jointly modelling both pixel mean intensities and pixel covariances. This can be achieved by using
two sets of latent variables, one that gates pair-wise interactions between pixels and another one that
sets the mean intensities of pixels, as we already proposed in some earlier work [4]. Here, we show
that this modelling choice is crucial to make the gated MRF work well on high-resolution images.
Finally, we show that the most widely used method of sharing weights in MRF's for high-resolution
images is overly constrained. Earlier work considered homogeneous MRF's in which each potential
is replicated at all image locations. This has the subtle effect of making learning very difficult
because of strong correlations at nearby sites. Following Gregor and LeCun [18] and also Tang and
Eliasmith [19], we keep the number of parameters under control by using local potentials, but unlike
Roth and Black [6] we only share weights between potentials that do not overlap.
2 Augmenting Gated MRF's with Mean Hidden Units
A Product of Student's t (PoT) model [15] is a gated MRF defined on small image patches that
can be viewed as modelling image-specific, pair-wise relationships between pixel values by using
the states of its latent variables. It is very good at representing the fact that two pixels have very
similar intensities and no good at all at modelling what these intensities are. Failure to model the
mean also leads to impoverished modelling of the covariances when the input images have nonzero mean intensity. The covariance RBM (cRBM) [20] is another model that shares the same
limitation since it only differs from PoT in the distribution of its latent variables: The posterior over
the latent variables is a product of Bernoulli distributions instead of Gamma distributions as in PoT.
We explain the fundamental limitation of these models by using a simple toy example: Modelling
two-pixel images using a cRBM with only one binary hidden unit, see fig. 1.
This cRBM assumes that the conditional distribution over the input is a zero-mean Gaussian with a
covariance that is determined by the state of the latent variable. Since the latent variable is binary, the
cRBM can be viewed as a mixture of two zero-mean full covariance Gaussians. The latent variable
uses the pairwise relationship between pixels to decide which of the two covariance matrices should
be used to model each image. When the input data is pre-proessed by making each image have zero
mean intensity (the empirical histogram is shown in the first row and first column), most images lie
near the origin because most of the times nearby pixels are strongly correlated. Less frequently we
encounter edge images that exhibit strong anti-correlation between the pixels, as shown by the long
tails along the anti-diagonal line. A cRBM could model this data by using two Gaussians (first row
and second column): one that is spherical and tight at the origin for smooth images and another one
that has a covariance elongated along the anti-diagonal for structured images.
If, however, the whole set of images is normalized by subtracting from every pixel the mean value
of all pixels over all images (second row and first column), the cRBM fails at modelling structured
images (second row and second column). It can fit a Gaussian to the smooth images by discovering
Figure 1: In the first row, each image is zero mean. In the second row, the whole set of data points is centered
but each image can have non-zero mean. The first column shows 8x8 images picked at random from natural
images. The images in the second column are generated by a model that does not account for mean intensity.
The images in the third column are generated by a model that has both "mean" and "covariance" hidden units.
The contours in the first column show the negative log of the empirical distribution of (tiny) natural two-pixel
images (x-axis being the first pixel and the y-axis the second pixel). The plots in the other columns are toy
examples showing how each model could represent the empirical distribution using a mixture of Gaussians
with components that have one of two possible covariances (corresponding to the state of a binary "covariance"
latent variable). Models that can change the means of the Gaussians (mPoT and mcRBM) can represent better
structured images (edge images lie along the anti-diagonal and are fitted by the Gaussians shown in red) while
the other models (PoT and cRBM) fail, overall when each image can have non-zero mean.
the direction of strong correlation along the main diagonal, but it is very likely to fail to discover the
direction of anti-correlation, which is crucial to represent discontinuities, because structured images
with different mean intensity appear to be evenly spread over the whole input space.
If the model has another set of latent variables that can change the means of the Gaussian distributions in the mixture (as explained more formally below and yielding the mPoT and mcRBM models),
then the model can represent both changes of mean intensity and the correlational structure of pixels
(see last column). The mean latent variables effectively subtract off the relevant mean from each
data-point, letting the covariance latent variable capture the covariance structure of the data. As
before, the covariance latent variable needs only to select between two covariance matrices.
In fact, experiments on real 8x8 image patches confirm these conjectures. Fig. 1 shows samples
drawn from PoT and mPoT. mPoT (and similarly mcRBM [4]) is not only better at modelling zero
mean images but it can also represent images that have non zero mean intensity well.
We now describe mPoT, referring the reader to [4] for a detailed description of mcRBM. In PoT [9] the energy function is:

    E^PoT(x, h^c) = Σ_i [ h_i^c (1 + (1/2)(C_iᵀ x)²) + (1 − γ) log h_i^c ]    (1)

where x is a vectorized image patch, h^c is a vector of Gamma "covariance" latent variables, C is a filter bank matrix and γ is a scalar parameter. The joint probability over input pixels and latent variables is proportional to exp(−E^PoT(x, h^c)). Therefore, the conditional distribution over the input pixels is a zero-mean Gaussian with covariance equal to:

    Σ_c = (C diag(h^c) Cᵀ)⁻¹.    (2)
In order to make the mean of the conditional distribution non-zero, we define mPoT as the normalized product of the above zero-mean Gaussian that models the covariance and a spherical covariance Gaussian that models the mean. The overall energy function becomes:

    E^mPoT(x, h^c, h^m) = E^PoT(x, h^c) + E^m(x, h^m)    (3)
Figure 2: Illustration of different choices of weight-sharing scheme for a RBM. Links converging to one latent
variable are filters. Filters with the same color share the same parameters. Kinds of weight-sharing scheme: A)
Global, B) Local, C) TConv and D) Conv. E) TConv applied to an image. Cells correspond to neighborhoods
to which filters are applied. Cells with the same color share the same parameters. F) 256 filters learned by
a Gaussian RBM with TConv weight-sharing scheme on high-resolution natural images. Each filter has size
16x16 pixels and it is applied every 16 pixels in both the horizontal and vertical directions. Filters in position
(i, j) and (1, 1) are applied to neighborhoods that are (i, j) pixels away from each other. Best viewed in color.
where h^m is another set of latent variables that are assumed to be Bernoulli distributed (but other distributions could be used). The new energy term is:

    E^m(x, h^m) = (1/2) xᵀx − Σ_j h_j^m W_jᵀ x    (4)

yielding the following conditional distribution over the input pixels:

    p(x | h^c, h^m) = N(Σ(W h^m), Σ),  Σ = (Σ_c⁻¹ + I)⁻¹    (5)

with Σ_c defined in eq. 2. As desired, the conditional distribution has non-zero mean².
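A small numerical sketch of eq. 5 (our naming; practical only for small patches, since it builds the full covariance explicitly):

    import numpy as np

    def sample_mpot_conditional(C, h_c, W, h_m, rng=np.random.default_rng(0)):
        d = C.shape[0]                                # pixel dimension
        prec = C @ np.diag(h_c) @ C.T + np.eye(d)     # Sigma_c^{-1} + I
        Sigma = np.linalg.inv(prec)
        mean = Sigma @ (W @ h_m)                      # Sigma (W h^m), as in eq. 5
        return rng.multivariate_normal(mean, Sigma)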
Patch-based models like PoT have been extended to high-resolution images by using spatially localized filters [6]. While we can subtract off the mean intensity from independent image patches to
successfully train PoT, we cannot do that on a high-resolution image because overlapping patches
might have different mean. Unfortunately, replicating potentials over the image ignoring variations
of mean intensity has been the leading strategy to date [6]3 . This is the major reason why generation
of high-resolution images is so poor. Sec. 4 shows that generation can be drastically improved by
explicitly accounting for variations of mean intensity, as performed by mPoT and mcRBM.
3 Weight-Sharing Schemes
By integrating out the latent variables, we can write the density function of any gated MRF as a
normalized product of potential functions (for mPoT refer to eq. 6). In this section we investigate
different ways of constraining the parameters of the potentials of a generic MRF.
Global: The obvious way to extend a patch-based model like PoT to high-resolution images is to
define potentials over the whole image; we call this scheme global. This is not practical because
1) the number of parameters grows about quadratically with the size of the image making training
too slow, 2) we do not need to model interactions between very distant pairs of pixels since their
dependence is negligible, and 3) we would not be able to use the model on images of different size.
Conv: The most popular way to handle big images is to define potentials on small subsets of
variables (e.g., neighborhoods of size 5x5 pixels) and to replicate these potentials across space while
2
The need to model the means was clearly recognized in [21] but they used conjunctive latent features that
simultaneously represented a contribution to the "precision matrix" in a specific direction and the mean along
that same direction.
3
The success of PoT-like models in Bayesian denoising is not surprising since the noisy image effectively
replaces the reconstruction term from the mean hidden units (see eq. 5), providing a set of noisy mean intensities
that are cleaned up by the patterns of correlation enforced by the covariance latent variables.
sharing their parameters at each image location [23, 24, 6]. This yields a convolutional weight-sharing scheme, also called homogeneous field in the statistics literature. This choice is justified
by the stationarity of natural images. This weight-sharing scheme is extremely concise in terms of
number of parameters, but also rather inefficient in terms of latent representation. First, if there are
N filters at each location and these filters are stepped by one pixel then the internal representation
is about N times overcomplete. The internal representation has not only high computational cost,
but it is also highly redundant. Since the input is mostly smooth and the parameters are the same
across space, the latent variables are strongly correlated as well. This inefficiency turns out to be
particularly harmful for a model like PoT causing the learned filters to become "random" looking
(see fig 3-iii). A simple intuition follows from the equivalence between PoT and square ICA [15]. If
the filter matrix C of eq. 1 is square and invertible, we can marginalize out the latent variables and
write: p(y) = Π_i S(y_i), where y_i = C_iᵀ x and S is a Student's t distribution. In other words, there
is an underlying assumption that filter outputs are independent. However, if the filters of matrix C
are shifted and overlapping versions of each other, this clearly cannot be true. Training PoT with the
Conv weight-sharing scheme forces the model to find filters that make filter outputs as independent
as possible, which explains the very high-frequency patterns that are usually discovered [6].
Local: The Global and Conv weight-sharing schemes are at the two extremes of a spectrum of
possibilities. For instance, we can define potentials on a small subset of input variables but, unlike
Conv, each potential can have its own set of parameters, as shown in fig. 2-B. This is called local,
or inhomogeneous field. Compared to Conv the number of parameters increases only slightly but
the number of latent variables required and their redundancy is greatly reduced. In fact, the model
learns different receptive fields at different locations as a better strategy for representing the input,
overall when the number of potentials is limited (see also fig. 2-F).
TConv: Local would not allow the model to be trained and tested on images of different resolution,
and it might seem wasteful not to exploit the translation invariant property of images. We therefore
advocate the use of a weight-sharing scheme that we call tiled-convolutional (TConv) shown in
fig. 2-C and E [18]. Each filter tiles the image without overlaps with copies of itself (i.e. the stride
equals the filter diameter). This reduces spatial redundancy of latent variables and allows the input
images to have arbitrary size. At the same time, different filters do overlap with each other in order
to avoid tiling artifacts. Fig. 2-F shows filters that were (jointly) learned by a Restricted Boltzmann
Machine (RBM) [29] with Gaussian input variables using the TConv weight-sharing scheme.
4 Experiments
We train gated MRF's with and without mean hidden units using different weight-sharing schemes.
The training procedure is very similar in all cases. We perform approximate maximum likelihood by
using Fast Persistent Contrastive Divergence (FPCD) [25] and we draw samples by using Hybrid
Monte Carlo (HMC) [26]. Since all latent variables can be exactly marginalized out we can use
HMC on the free energy (negative logarithm of the marginal distribution over the input pixels). For
mPoT this is:
    F^mPoT(x) = −log(p(x)) + const. = Σ_{k,i} γ log(1 + (1/2)(C_iᵀ x_k)²) + (1/2) xᵀx − Σ_{k,j} log(1 + exp(W_jᵀ x_k))    (6)
where the index k runs over spatial locations and xk is the k-th image patch. FPCD keeps samples,
called negative particles, that it uses to represent the model distribution. These particles are all
updated after each weight update. For each mini-batch of data-points a) we compute the derivative
of the free energy w.r.t. the training samples, b) we update the negative particles by running HMC for
one HMC step consisting of 20 leapfrog steps. We start at the previous set of negative particles and
use as parameters the sum of the regular parameters and a small perturbation vector, c) we compute
the derivative of the free energy at the negative particles, and d) we update the regular parameters
by using the difference of gradients between step a) and c) while the perturbation vector is updated
using the gradient from c) only. The perturbation is also strongly decayed to zero and is subject to a
larger learning rate. The aim is to encourage the negative particles to explore the space more quickly
by slightly and temporarily raising the energy at their current position. Note that the use of FPCD
as opposed to other estimation methods (like Persistent Contrastive Divergence [27]) turns out to be
crucial to achieve good mixing of the sampler even after training. We train on mini-batches of 32
samples using gray-scale images of approximate size 160x160 pixels randomly cropped from the
Berkeley segmentation dataset [28]. We perform 160,000 weight updates decreasing the learning
by a factor of 4 by the end of training. The initial learning rate is set to 0.1 for the covariance
5
Figure 3: 160x160 samples drawn by A) mPoT-TConv, B) mHPoT-TConv, C) mcRBM-TConv and D) PoT-TConv. On the side also i) a subset of 8x8 "covariance" filters learned by mPoT-TConv (the plot below shows
how the whole set of filters tile a small patch; each bar corresponds to a Gabor fit of a filter and colors identify
filters applied at the same 8x8 location, each group is shifted by 2 pixels down the diagonal and a high-resolution
image is tiled by replicating this pattern every 8 pixels horizontally and vertically), ii) a subset of 8x8 "mean"
filters learned by the same mPoT-TConv, iii) filters learned by PoT-Conv and iv) by PoT-TConv.
filters (matrix C of eq. 1), 0.01 for the mean parameters (matrix W of eq. 4), and 0.001 for the
other parameters (γ of eq. 1). During training we condition on the borders and initialize the negative
particles at zero in order to avoid artifacts at the border of the image. We learn 8x8 filters and
pre-multiply the covariance filters by a whitening transform retaining 99% of the variance; we also
normalize the norm of the covariance filters to prevent some of them from decaying to zero during
training4 .
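A schematic of one FPCD weight update as described above (a sketch only: `free_energy_grad` and `hmc_step` stand for routines computing the gradient of eq. 6 and one HMC iteration of 20 leapfrog steps, which we do not spell out; the learning rates and decay constant are illustrative, not the paper's values):

    def fpcd_update(data, particles, theta, theta_fast,
                    free_energy_grad, hmc_step, lr=0.01, lr_fast=0.05, decay=0.95):
        g_data = free_energy_grad(data, theta)                # step a)
        particles = hmc_step(particles, theta + theta_fast)   # step b): perturbed model
        g_model = free_energy_grad(particles, theta)          # step c)
        theta = theta - lr * (g_data - g_model)               # step d): regular params
        # perturbation: updated from the gradient of c) only, and strongly decayed,
        # temporarily raising the energy where the negative particles currently sit
        theta_fast = decay * theta_fast + lr_fast * g_model
        return particles, theta, theta_fast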
Whenever we use the TConv weight-sharing scheme the model learns covariance filters that mostly
resemble localized and oriented Gabor functions (see fig. 3-i and iv), while the Conv weight-sharing
scheme learns structured but poorly localized high-frequency patterns (see fig. 3-iii) [6]. The TConv
models re-use the same 8x8 filters every 8 pixels and apply a diagonal offset of 2 pixels between
neighboring filters with different weights in order to reduce tiling artifacts. There are 4 sets of filters,
each with 64 filters for a total of 256 covariance filters (see bottom plot of fig. 3). Similarly, we have
4 sets of mean filters, each with 32 filters. These filters usually have non-zero mean and exhibit
on-center off-surround and off-center on-surround patterns (see fig. 3-ii).
In order to draw samples from the learned models, we run HMC for a long time (10,000 iterations,
each composed of 20 leap-frog steps). Some samples of size 160x160 pixels are reported in fig. 3 A)–D). Without modelling the mean intensity, samples lack structure and do not seem much different
from those that would be generated by a simple Gaussian model merely fitting the second order
statistics (see fig. 3 in [1] and also fig. 2 in [7]). By contrast, structure, sharp boundaries and some
simple texture emerge only from models that have mean latent variables, namely mcRBM, mPoT
and mHPoT which differs from mPoT by having a second layer pooling matrix on the squared
covariance filter outputs [11].
A more quantitative comparison is reported in table 1. We first compute marginal statistics of filter
responses using the generated images, natural images from the test set, and random images. The
statistics are the normalized histogram of individual filter responses to 24 Gabor filters (8 orientations and 3 scales). We then calculate the KL divergence between the histograms on random images
and generated images and the KL divergence between the histograms on natural images and generated images. The table also reports the average difference of energies between random images and
natural images. All results demonstrate that models that account for mean intensity generate images
that are closer to natural images than to random images, whereas models that do not account for the
mean (like the widely used PoT-Conv) produce samples that are actually closer to random images.

MODEL          F(R) - F(T) (10^4)   KL(R ∥ G)   KL(T ∥ G)   KL(R ∥ G) - KL(T ∥ G)
PoT-Conv       2.9                  0.3         0.6         -0.3
PoT-TConv      2.8                  0.4         1.0         -0.6
mPoT-TConv     5.2                  1.0         0.2         0.8
mHPoT-TConv    4.9                  1.7         0.8         0.9
mcRBM-TConv    3.5                  1.5         1.0         0.5

Table 1: Comparing MRFs by measuring: difference of energy (negative log ratio of probabilities) between
random images (R) and test natural images (T), the KL divergence between statistics of random images (R) and
generated images (G), KL divergence between statistics of test natural images (T) and generated images (G),
and difference of these two KL divergences. Statistics are computed using 24 Gabor filters.
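As a rough sketch of this evaluation protocol, the KL divergence between response histograms can be computed as below; the number of bins and the smoothing constant are assumptions, since the text does not specify them.

import numpy as np

def histogram_kl(responses_p, responses_q, bins=64, eps=1e-8):
    """KL(P || Q) between normalized histograms of one filter's responses.

    responses_p / responses_q: 1-D arrays of responses of a single Gabor
    filter to two image sets (e.g., natural vs. generated).
    """
    lo = min(responses_p.min(), responses_q.min())
    hi = max(responses_p.max(), responses_q.max())
    p, _ = np.histogram(responses_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(responses_q, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps   # smooth to avoid division by zero
    q = q / q.sum() + eps
    return np.sum(p * np.log(p / q))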
4.1 Discriminative Experiments on Weight-Sharing Schemes
In future work, we intend to use the features discovered by the generative model for recognition.
To understand how the different weight sharing schemes affect recognition performance we have
done preliminary tests using the discriminative performance of a simpler model on simpler data. We
consider one of the simplest and most versatile models, namely the RBM [29]. Since we also aim
to test the Global weight-sharing scheme we are constrained to using fairly low resolution datasets
such as the MNIST dataset of handwritten digits [30] and the CIFAR 10 dataset of generic object
categories [22]. The MNIST dataset has soft binary images of size 28x28 pixels, while the CIFAR
10 dataset has color images of size 32x32 pixels. CIFAR 10 has 10 classes, 5000 training samples
per class and 1000 test samples per class. MNIST also has 10 classes with, on average, 6000 training
samples per class and 1000 test samples per class.
The energy function of the RBM trained on the CIFAR 10 dataset, modelling input pixels with 3
(R,G,B) Gaussian variables [31], is exactly the one shown in eq. 4; while the RBM trained on MNIST
uses logistic units for the pixels and the energy function is again the same as before but without any
quadratic term. All models are trained in an unsupervised way to approximately maximize the
likelihood in the training set using Contrastive Divergence [32]. They are then used to represent
each input image with a feature vector (mean of the posterior over the latent variables) which is
fed to a multinomial logistic classifier for discrimination. Models are compared in terms of: 1)
recognition accuracy, 2) convergence time and 3) dimensionality of the representation. In general,
assuming filters much smaller than the input image and assuming equal number of latent variables,
Conv, TConv and Local models process each sample faster than Global by a factor approximately
equal to the ratio between the area of the image and the area of the filters, which can be very large
in practice.
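A minimal sketch of the feature extraction step: the mean of the posterior over the binary latent variables of an RBM, which is what gets fed to the multinomial logistic classifier. Shapes and names are illustrative.

import numpy as np

def rbm_features(X, W, b_hidden):
    """Mean of the posterior over binary latent variables.

    X: (n, d) inputs; W: (d, h) weights; b_hidden: (h,) hidden biases.
    A sketch for the binary/Gaussian RBMs described in the text.
    """
    return 1.0 / (1.0 + np.exp(-(X @ W + b_hidden)))  # elementwise sigmoid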
In the first set of experiments reported on the left of fig. 4 we study the internal representation in
terms of discrimination and dimensionality using the MNIST dataset. For each choice of dimensionality all models are trained using the same number of operations. This is set to the amount necessary
to complete one epoch over the training set using the Global model. This experiment shows that: 1)
Local outperforms all other weight-sharing schemes for a wide range of dimensionalities, 2) TConv
does not perform as well as Local probably because the translation invariant assumption is clearly
violated for these relatively small, centered, images, 3) Conv performs well only when the internal
representation is very high dimensional (10 times overcomplete) otherwise it severely underfits, 4)
Global performs well when the representation is compact but its performance degrades rapidly as
this increases because it needs more than the allotted training time. The right hand side of fig. 4
shows how the recognition performance evolves as we increase the number of operations (or training time) using models that produce a twice overcomplete internal representation. With only very
few filters Conv still underfits and it does not improve its performance by training for longer, but
Global does improve and eventually it reaches the performance of Local. If we look at the crossing
of the error rate at 2% we can see that Local is about 4 times faster than Global. To summarize, Local provides more compact representations than Conv, is much faster than Global while achieving
similar performance in discrimination. Also, Local can easily scale to larger images while Global
cannot.

Figure 4: Experiments on MNIST using RBMs with different weight-sharing schemes. Left: Error rate as
a function of the dimensionality of the latent representation. Right: Error rate as a function of the number of
operations (normalized to those needed to perform one epoch in the Global model); all models have a twice
overcomplete latent representation.
Similar experiments are performed using the CIFAR 10 dataset [22] of natural images. Using the
same protocol introduced in earlier work by Krizhevsky [22], the RBMs are trained in an unsupervised way on a subset of the 80 million tiny images dataset [33] and then "fine-tuned" on the CIFAR
10 dataset by supervised back-propagation of the error through the linear classifier and feature extractor. All models produce an approximately 10,000 dimensional internal representation to make a
fair comparison. Models using local filters learn 16x16 filters that are stepped every pixel. Again,
we do not experiment with the TConv weight-sharing scheme because the image is not large enough
to allow enough replicas.
Similarly to fig. 3-iii the Conv weight-sharing scheme was very difficult to train and did not produce
Gabor-like features. Indeed, careful injection of sparsity and long training time seem necessary [31]
for these RBMs. By contrast, both Local and Global produce Gabor-like filters similar to those
shown in fig. 2 F). The model trained with the Conv weight-sharing scheme yields an accuracy equal
to 56.6%, while Local and Global yield much better performance, 63.6% and 64.8% [22], respectively. Although Local and Global have similar performance, training with the Local weight-sharing
scheme took under an hour while using the Global weight-sharing scheme required more than a day.
5 Conclusions and Future Work
This work is motivated by the poor generative quality of currently popular MRF models of natural
images. These models generate images that are actually more similar to white noise than to natural
images. Our contribution is to recognize that current models can benefit from 1) the addition of
a simple model of the mean intensities and from 2) the use of a less constrained weight-sharing
scheme. By augmenting these models with an extra set of latent variables that model mean intensity
we can generate samples that look much more realistic: they are characterized by smooth regions,
sharp boundaries and some simple high frequency texture. We validate our approach by comparing
the statistics of filter outputs on natural images and generated images.
In the future, we plan to integrate these MRFs into deeper hierarchical models and to use their
internal representation to perform object recognition in high-resolution images. The hope is to
further improve generation by capturing longer range dependencies and to exploit this to better cope
with missing values and ambiguous sensory inputs.
References
[1] E.P. Simoncelli. Statistical modeling of photographic images. Handbook of Image and Video Processing, pages 431–441, 2005.
[2] A. Hyvarinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001.
[3] G.E. Hinton and R.R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[4] M. Ranzato and G.E. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In CVPR, 2010.
[5] M.J. Wainwright and E.P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In NIPS, 2000.
[6] S. Roth and M.J. Black. Fields of experts: A framework for learning image priors. In CVPR, 2005.
[7] U. Schmidt, Q. Gao, and S. Roth. A generative perspective on MRFs in low-level vision. In CVPR, 2010.
[8] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. PAMI, 6:721–741, 1984.
[9] M. Welling, G.E. Hinton, and S. Osindero. Learning sparse topographic representations with products of Student-t distributions. In NIPS, 2003.
[10] S.C. Zhu and D. Mumford. Prior learning and Gibbs reaction diffusion. PAMI, pages 1236–1250, 1997.
[11] S. Osindero, M. Welling, and G.E. Hinton. Topographic product models applied to natural scene statistics. Neural Comp., 18:344–381, 2006.
[12] S. Osindero and G.E. Hinton. Modeling image patches with a directed hierarchy of Markov random fields. In NIPS, 2008.
[13] Y. Karklin and M.S. Lewicki. Emergence of complex cell properties by learning to generalize in natural scenes. Nature, 457:83–86, 2009.
[14] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37:3311–3325, 1997.
[15] Y.W. Teh, M. Welling, S. Osindero, and G.E. Hinton. Energy-based models for sparse overcomplete representations. JMLR, 4:1235–1260, 2003.
[16] Y. Weiss and W.T. Freeman. What makes a good model of natural images? In CVPR, 2007.
[17] S. Roth and M.J. Black. Fields of experts. Int. Journal of Computer Vision, 82:205–229, 2009.
[18] K. Gregor and Y. LeCun. Emergence of complex-like cells in a temporal product network with local receptive fields. arXiv:1006.0448, 2010.
[19] C. Tang and C. Eliasmith. Deep networks for robust visual recognition. In ICML, 2010.
[20] M. Ranzato, A. Krizhevsky, and G.E. Hinton. Factored 3-way restricted Boltzmann machines for modeling natural images. In AISTATS, 2010.
[21] N. Heess, C.K.I. Williams, and G.E. Hinton. Learning generative texture models with extended fields-of-experts. In BMVC, 2009.
[22] A. Krizhevsky. Learning multiple layers of features from tiny images. MSc Thesis, Dept. of Comp. Science, Univ. of Toronto, 2009.
[23] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. Lang. Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37:328–339, 1989.
[24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[25] T. Tieleman and G.E. Hinton. Using fast weights to improve persistent contrastive divergence. In ICML, 2009.
[26] R.M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, 1996.
[27] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, 2008.
[28] http://www.cs.berkeley.edu/projects/vision/grouping/segbench/.
[29] M. Welling, M. Rosen-Zvi, and G.E. Hinton. Exponential family harmoniums with an application to information retrieval. In NIPS, 2005.
[30] http://yann.lecun.com/exdb/mnist/.
[31] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
[32] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[33] A. Torralba, R. Fergus, and W.T. Freeman. 80 million tiny images: a large dataset for non-parametric object and scene recognition. PAMI, 30:1958–1970, 2008.
3,466 | 4,139 | Learning the context of a category
Daniel J. Navarro
School of Psychology
University of Adelaide
Adelaide, SA 5005, Australia
[email protected]
Abstract
This paper outlines a hierarchical Bayesian model for human category learning
that learns both the organization of objects into categories, and the context in
which this knowledge should be applied. The model is fit to multiple data sets,
and provides a parsimonious method for describing how humans learn context
specific conceptual representations.
1 Introduction
Human knowledge and expertise is often tied to particular contexts. The superior memory that chess
masters have for chessboard configurations is limited to plausible games, and does not generalize
to arbitrary groupings of pieces [1]. Expert firefighters make different predictions about the same
fire depending on whether it is described as a back-burn or a to-be-controlled fire [2]. In part,
this context specificity reflects the tendency for people to organize knowledge into independent
"bundles" which may contain contradictory information, and which may be deemed appropriate
to different contexts. This phenomenon is called knowledge partitioning [2?6], and is observed
in artificial category learning experiments as well as real world situations. When people learn to
classify stimuli in an environment where there are systematic changes in the "context" in which
observations are made, they often construct category representations that are tightly linked to the
context, and only generalize their knowledge when the context is deemed appropriate [3, 4, 6].
Context-induced knowledge partitioning poses a challenge to models of human learning. As noted
in [4], many models cannot accommodate the effect, or, as discussed later in this paper, are somewhat
unsatisfying in the manner that they do so. This paper explores the possibility that Bayesian models
of human category learning can provide the missing explanation. The structure of the paper is as
follows: first, a context-sensitive Bayesian category learning model is described. This model is
then shown to provide a parsimonious and psychologically appealing account of the knowledge
partitioning effect. Following this, a hierarchical extension is introduced to the model, which allows
it to acquire abstract knowledge about the context specificity of the categories, in a manner that is
consistent with the data on human learning.
2 Learning categories in context
This section outlines a Bayesian model that is sensitive to the learning context. It extends Anderson's
[7] rational model of categorization (RMC) by allowing the model to track the context in which
observations are made, and draw inferences about the role that context plays.
2.1 The statistical model
The central assumption in the RMC is that the learner seeks to organize his or her observations
into clusters. If z_i denotes the cluster to which the ith observation is assigned, then the joint prior
distribution over z_n = (z_1, . . . , z_n) can be specified via the Chinese restaurant process [8],

z_n \mid \alpha \sim \mathrm{CRP}(\alpha).   (1)
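For intuition, here is a small sketch of forward sampling from this prior; the default α is the value derived in Section 3.1, and the helper name is illustrative.

import numpy as np

def sample_crp(n, alpha=0.72, rng=np.random):
    """Draw assignments z_1..z_n from CRP(alpha) (eq. 1): item i joins an
    existing cluster k with probability proportional to its size n_k, or
    opens a new cluster with probability proportional to alpha."""
    z = [0]
    for i in range(1, n):
        counts = np.bincount(z)
        weights = np.append(counts, alpha).astype(float)
        z.append(rng.choice(len(weights), p=weights / weights.sum()))
    return np.array(z)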
Each cluster of observations is mapped onto a distribution over features. Feature values are denoted
by the vector x_i = (x_{i1}, . . . , x_{id}), the values of the ith observation for each of the d features. When
feature values vary continuously, the RMC associates the kth cluster with a multivariate Gaussian
that has mean vector μ_k and covariance matrix Σ_k. Setting standard conjugate priors, we obtain

x_i \mid \mu_k, \Sigma_k, z_i = k \sim \mathrm{Normal}(\mu_k, \Sigma_k)
\mu_k \mid \Sigma_k, \mu_0, \kappa_0 \sim \mathrm{Normal}(\mu_0, \Sigma_k / \kappa_0)   (2)
\Sigma_k \mid \nu_0, \Lambda_0 \sim \mathrm{Inv\text{-}Wishart}(\nu_0, \Lambda_0^{-1})
This is a minor generalization of the original model, as it allows any covariance matrix (i.e., symmetric positive definite Σ) and does not require the restrictive assumption that the stimulus dimensions
are independent (which would force Σ to be diagonal). While independence is reasonable when
stimulus dimensions are separable [9], knowledge partitioning can occur regardless of whether dimensions are separable or integral (see [6] for details), so the more general formulation is useful.
In the RMC, labels are treated in the same way as discrete-valued features. Each cluster is associated
with a distribution over category labels. If ℓ_i denotes the label given to the ith observation, then

\ell_i \mid z_i = k, \theta_k \sim \mathrm{Bernoulli}(\theta_k)   (3)
\theta_k \mid \beta \sim \mathrm{Beta}(\beta, \beta)

The β parameter describes the extent to which items in the same cluster are allowed to have different
labels. If there are more than two labels, this generalizes to a Dirichlet-multinomial model.
Equations 1–3 define the standard RMC. The extension to handle context dependence is straightforward: contextual information is treated as an auxiliary feature, and so each cluster is linked to
a distribution over contexts. In the experiments considered later, each observation is assigned to
a context individually, which allows us to apply the exact same model for contextual features as
regular ones. Thus a very simple context model is sufficient:

c_i \mid z_i = k, \phi_k \sim \mathrm{Bernoulli}(\phi_k)   (4)
\phi_k \mid \gamma \sim \mathrm{Beta}(\gamma, \gamma)

The context specificity parameter γ is analogous to β and controls the extent to which clusters can
include observations made in different contexts. In more general contexts, a richer model would be
required to capture the manner in which context can vary.
Applying the model requires values to be chosen for α, β, γ, μ_0, κ_0, ν_0 and Λ_0, most of which can
be fixed in a sensible way. Firstly, since the categories do not overlap in the experiments discussed
here it makes sense to set β = 0, which has the effect of forcing each cluster to be associated only
with one category. Secondly, human learners rarely have strong prior knowledge about the features
used in artificial category learning experiments, expressed by setting κ_0 = 1 and ν_0 = 3 (ν_0 is larger
to ensure that the priors over features always have a well-defined covariance structure). Thirdly, to
approximate the fact that the experiments quickly reveal the full range of stimuli to participants,
it makes sense to set μ_0 and Λ_0 to the empirical mean and covariances across all training items.
Having made these choices, we may restrict our attention to α (the bias to introduce new clusters)
and γ (the bias to treat clusters as context general). A sketch of this setup appears below.
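This is a minimal sketch of the hyperparameter choices just described, assuming the training items are stored as rows of a matrix; the helper name is illustrative.

import numpy as np

def build_prior(X_train):
    """Fix the feature prior as described above: mu0 and Lambda0 are the
    empirical mean and covariance of the training items, kappa0 = 1 and
    nu0 = 3. (beta is set to 0 and alpha to 0.72 elsewhere in the text.)
    Returns the tuple (mu0, kappa0, nu0, Lambda0)."""
    mu0 = X_train.mean(axis=0)
    Lambda0 = np.cov(X_train, rowvar=False)
    return mu0, 1.0, 3.0, Lambda0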
2.2 Inference in the model
Inference is performed via a collapsed Gibbs sampler, integrating out θ, φ, μ and Σ and defining a
sampler only over the cluster assignments z. To do so, note that

P(z_i = k \mid x, \ell, c, z_{-i}) \propto P(x_i, \ell_i, c_i \mid x_{-i}, \ell_{-i}, c_{-i}, z_{-i}, z_i = k)\, P(z_i = k \mid z_{-i})   (5)
= P(x_i \mid x_{-i}, z_{-i}, z_i = k)\, P(\ell_i \mid \ell_{-i}, z_{-i}, z_i = k)\, P(c_i \mid c_{-i}, z_{-i}, z_i = k)\, P(z_i = k \mid z_{-i})   (6)

where the dependence on the parameters that describe the prior (i.e., α, β, γ, μ_0, κ_0, ν_0, Λ_0) is suppressed for the sake of readability. In this expression z_{−i} denotes the set of all cluster assignments
except the ith, and the normalizing term is calculated by summing Equation 6 over all possible cluster assignments k, including the possibility that the ith item is assigned to an entirely new cluster.
The conditional prior probability P(z_i = k | z_{−i}) is

P(z_i = k \mid z_{-i}) =
\begin{cases}
\frac{n_k}{n - 1 + \alpha} & \text{if } k \text{ is old} \\
\frac{\alpha}{n - 1 + \alpha} & \text{if } k \text{ is new}
\end{cases}   (7)

where n_k counts the number of items (not including the ith) that have been assigned to the kth
cluster. Since the context is modelled using a beta-Bernoulli model:
P(c_i \mid c_{-i}, z_{-i}, z_i = k) = \int_0^1 P(c_i \mid \phi_k, z_i = k)\, P(\phi_k \mid c_{-i}, z_{-i})\, d\phi_k = \frac{n_k^{(c_i)} + \gamma}{n_k + 2\gamma}   (8)

where n_k^{(c_i)} counts the number of observations that have been assigned to cluster k and appeared in
the same context as the ith item. A similar result applies to the labelling scheme:
P(\ell_i \mid \ell_{-i}, z_{-i}, z_i = k) = \int_0^1 P(\ell_i \mid \theta_k, z_i = k)\, P(\theta_k \mid \ell_{-i}, z_{-i})\, d\theta_k = \frac{n_k^{(\ell_i)} + \beta}{n_k + 2\beta}   (9)

where n_k^{(ℓ_i)} counts the number of observations that have been assigned to cluster k and given the
same label as observation i. Finally, integrating out the mean vector μ_k and covariance matrix Σ_k
for the feature values yields a d-dimensional multivariate t distribution (e.g., [10], ch. 3):

P(x_i \mid x_{-i}, z_{-i}, z_i = k) = \int P(x_i \mid \mu_k, \Sigma_k, z_i = k)\, P(\mu_k, \Sigma_k \mid x_{-i}, z_{-i})\, d(\mu_k, \Sigma_k)   (10)

= \frac{\Gamma\!\left(\frac{\nu_k^* + d}{2}\right)}{\Gamma\!\left(\frac{\nu_k^*}{2}\right) (\nu_k^* \pi)^{d/2}\, |\Lambda_k^*|^{1/2}} \left(1 + \frac{(x_i - \mu_k^*)\, \Lambda_k^{*-1}\, (x_i - \mu_k^*)^T}{\nu_k^*}\right)^{-\frac{\nu_k^* + d}{2}}   (11)

In this expression the posterior degrees of freedom for cluster k is ν*_k = ν_0 + n_k − d + 1 and the
posterior mean is μ*_k = (κ_0 μ_0 + n_k x̄_k)/(κ_0 + n_k), where x̄_k denotes the empirical mean feature
values for items in the cluster. Finally, the posterior scale matrix is

\Lambda_k^* = \left(\Lambda_0 + S_k + \frac{\kappa_0 n_k}{\kappa_0 + n_k} (\bar{x}_k - \mu_0)^T (\bar{x}_k - \mu_0)\right) \frac{\kappa_0 + n_k + 1}{(\kappa_0 + n_k)(\nu_0 + n_k - 2d + 2)}   (12)

where S_k = Σ_i (x_i − x̄_k)^T (x_i − x̄_k) is the sum of squares matrix around the empirical cluster mean
x̄_k, and the sum in question is taken over all observations assigned to cluster k.
Taken together, Equations 6, 8, 9 and 11 suggest a simple Gibbs sampler over the cluster assignments z (a sketch of one sweep appears below). Cluster assignments z_i are initialized randomly, and are then sequentially redrawn from
the conditional posterior distribution in Equation 6. For the applications in this paper, the sampler
typically converges within only a few iterations, but a much longer burn-in (usually 1000 iterations,
never less than 100) was used in order to be safe. Successive samples are drawn at a lag of 10
iterations, and multiple runs (between 5 and 10) are used in all cases.
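This sketch of one sweep combines eqs. 7-9 and 11, reusing the predictive density above. It assumes β, γ > 0; the paper's β = 0 limit, which simply forbids mixed-label clusters, would need a special case. All names are illustrative.

import numpy as np

def gibbs_sweep(z, X, labels, contexts, alpha, beta, gamma,
                predictive_logpdf, prior):
    """One collapsed Gibbs sweep over cluster assignments (eq. 6).
    `prior` bundles (mu0, kappa0, nu0, Lambda0)."""
    n = len(z)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        ks = sorted(set(z[j] for j in others))
        new_k = (max(ks) + 1) if ks else 0
        candidates = ks + [new_k]                 # existing clusters + new
        log_p = np.empty(len(candidates))
        for idx, k in enumerate(candidates):
            members = [j for j in others if z[j] == k]
            n_k = len(members)
            lp = np.log(n_k) if n_k > 0 else np.log(alpha)       # eq. 7
            n_lab = sum(labels[j] == labels[i] for j in members)
            lp += np.log((n_lab + beta) / (n_k + 2 * beta))      # eq. 9
            n_ctx = sum(contexts[j] == contexts[i] for j in members)
            lp += np.log((n_ctx + gamma) / (n_k + 2 * gamma))    # eq. 8
            lp += predictive_logpdf(X[i], X[members], *prior)    # eq. 11
            log_p[idx] = lp
        p = np.exp(log_p - log_p.max())           # normalize stably
        z[i] = candidates[np.random.choice(len(candidates), p=p / p.sum())]
    return z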
3 Application to knowledge partitioning experiments
To illustrate the behavior of the model, consider the most typical example of a knowledge partitioning experiment [3, 4, 6]. Stimuli vary along two continuous dimensions (e.g., height of a rectangle,
location of a radial line), and are organized into categories using the scheme shown in Figure 1a.
There are two categories organized into an "inside-outside" structure, with one category (black circles/squares) occupying a region along either side of the other one (white circles/squares). The
critical characteristic of the experiment is that each stimulus is presented in a particular "context",
usually operationalized as an auxiliary feature not tied to the stimulus itself, such as the background
color. In Figure 1a, squares correspond to items presented in one context, and circles to items presented in the other context. Participants are trained on these items in a standard supervised categorization experiment: stimuli are presented one at a time (with the context variable), and participants
are asked to predict the category label. After making a prediction, the true label is revealed to them.
Figure 1: Stimuli used in the typical knowledge partitioning design (left) and the different generalization patterns that are displayed by human learners (right). Percentages refer to the probability of
selecting category label A.
This procedure is repeated until participants can correctly label all items. At this point, participants
are shown transfer items (the crosses in Figure 1a), and asked what category label these items should
be given. No feedback is given during this phase. Critically, each transfer item is presented in both
contexts, to determine whether people generalize in a context specific way.
The basic effect, replicated across several different experiments, is that there are strong individual
differences in how people solve the problem. This leads to the two characteristic patterns of generalization shown in Figure 1b (these data are from Experiments 1 and 2A in [6]). Some participants are
context insensitive (lower two panels) and their predictions about the transfer items do not change
as a function of context. However, other participants are context sensitive (upper panels) and adopt
a very different strategy depending on which context the transfer item is presented in. This is taken
to imply [3, 4, 6] that the context sensitive participants have learned a conceptual representation in
which knowledge is "partitioned" into different bundles, each associated with a different context.
3.1 Learning the knowledge partition
The initial investigation focused on what category representations the model learns, as a function
of α and γ. After varying both parameters over a broad range, it was clear that there are two quite
different solutions that the model can produce, illustrated in Figure 2. In the four cluster solution
(panel b, small γ), the clusters never aggregate across items observed in different contexts. In
contrast, the three cluster solution (panel a, larger γ) is more context general, and collapses category
B into a single cluster. However, there is an interaction with α, since large α values drive the model
to introduce more clusters. As a result, for α > 1 the model tends not to produce the three cluster
solution. Given that the main interest is in γ, we can fix α such that the prior expected number of
clusters is 3.5, so as to be neutral with respect to the two solutions. Since the expected number of
clusters is given by \sum_{k=0}^{n-1} \frac{\alpha}{\alpha + k} [11] and there are n = 40 observations, this value is α = 0.72.
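The value α = 0.72 can be recovered numerically from this expectation; the root-finding bracket below is an arbitrary choice.

from scipy.optimize import brentq

def expected_clusters(alpha, n=40):
    """E[number of clusters] under CRP(alpha) with n observations [11]."""
    return sum(alpha / (alpha + k) for k in range(n))

# solve for alpha giving 3.5 expected clusters (yields roughly 0.72)
alpha = brentq(lambda a: expected_clusters(a, 40) - 3.5, 1e-6, 10.0)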
The next aim was to quantify the extent to which γ influences the relative prevalence of the four
cluster solution versus the three cluster solution. For any given partition produced by the model, the
adjusted Rand index [12] can be used to assess its similarity to the two idealized solutions (Figure 2a
and 2b). Since the adjusted Rand index measures the extent to which any given pair of items are classified in the same way by the two solutions, it is a natural measure of how close a model-generated
solution is to one of the two idealized solutions. Then, adopting an approach loosely inspired by
PAC-learning [13], two partitions were deemed to be approximately the same if the adjusted Rand
4
index between the two exceeded 0.9. The estimated posterior probability that the model solutions
approximate either of the two idealized partitions is plotted in Figure 2c as a function of γ.

Figure 2: The two different clustering schemes produced by the context-sensitive RMC, and the
values of γ that produce them (for α fixed at 0.72). See main text for details.
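For reference, the adjusted Rand index used in this comparison can be computed as follows (a standard implementation of [12], with illustrative names).

from math import comb
import numpy as np

def adjusted_rand(z1, z2):
    """Adjusted Rand index between two partitions [12]."""
    z1, z2 = np.asarray(z1), np.asarray(z2)
    n = len(z1)
    labels1, labels2 = np.unique(z1), np.unique(z2)
    # contingency table of co-assignments
    table = np.array([[np.sum((z1 == a) & (z2 == b)) for b in labels2]
                      for a in labels1])
    sum_ij = sum(comb(int(v), 2) for v in table.ravel())
    sum_a = sum(comb(int(v), 2) for v in table.sum(axis=1))
    sum_b = sum(comb(int(v), 2) for v in table.sum(axis=0))
    expected = sum_a * sum_b / comb(n, 2)
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)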
At smaller values of γ (below about 3.7) the four cluster solution is extremely dominant whereas
at larger values the three cluster solution is preferred. Since there are approximately 1.6 × 10^35
possible partitions of 40 objects, the extent of this dominance is clearly very strong.

The fact that the model concentrates on two different but entirely sensible solutions as a function of
γ is very appealing from a psychological perspective. One of the most desirable characteristics is the
fact that the partitioning of the learner's knowledge is made explicit. That is, the model learns a much
more differentiated and context-bound representation when γ is small, and a more context general
and less differentiated representation when γ is large. By way of comparison, the only other model
that has been shown to produce the effect is ATRIUM [14], which in its standard form consists of
a linked "rule learning" module and an "exemplar learning" module. In order to fit the data, the
model was modified [4] so that it starts with two rule modules and an exemplar model. During
training, the model learns to weight each of the rule modules differently depending on context,
thereby producing context specific generalizations. This provides a partial explanation of the effect,
but it is rather unsatisfying in some ways. In ATRIUM, the knowledge partition is represented via
the learned division of responsibilities between two hard coded rule modules [4]. In a very real
sense, the partition is actually hard coded into the architecture of the model. As such, ATRIUM
learns the context dependence, but not the knowledge partition itself.
3.2 Generalizing in context-specific and context-general ways
The discussion to this point shows how the value of γ shapes the conceptual knowledge that the
model acquires, but has not looked at what generalizations the model makes. However, it is straightforward to show that varying γ does allow the context-sensitive RMC to capture the two generalization patterns in Figure 1. With this in mind, Figure 3 plots the generalizations made by the model
for two different levels of context specificity (γ = 0 and γ = 10) and for the two different clustering
solutions. Obviously, in view of the results in Figure 2c the most interesting cases are panels (a) and
(d), since those correspond to the solutions most likely to be learned by the model, but it is useful
to consider all four cases. As is clear from inspection (and verified by the squared correlations
listed in the figure caption), when γ is small the model generalizes in a context specific manner,
but when γ is large the generalizations are the same in all contexts. This happens for both clustering
solutions, which implies that γ plays two distinct but related roles, insofar as it influences the context
specificity of both the learned knowledge partition and the generalizations to new observations.
4 Acquiring abstract knowledge about context specificity
One thing missing from both ATRIUM and the RMC is an explanation for how the leaner decides
whether context specific or context general representations are appropriate. In both cases, the model
has free parameters that govern the switch between the two cases, and these parameters must be
estimated from data. In the RMC, γ is a free parameter that does all the work; for ATRIUM,
four separate parameters are varied [4]. This poses the question: how do people acquire abstract
knowledge about which way to generalize? In RMC terms, how do we infer the value of γ?

Figure 3: Generalizations made by the model. In panel (a) the model accounts for 82.1% of the
variance in the context sensitive data, but only 35.2% of the variance in the context insensitive data.
For panel (b) these numbers are 77.9% and 3.6% respectively. When γ is large the pattern reverses:
in panel (c) only 23.6% of the variance in the context sensitive data is explained, whereas 67.1% of
the context insensitive data can be accounted for. In panel (d), the numbers are 17.5% and 73.9%.
To answer this, note that if the context varies in a systematic fashion, an intelligent learner might
come to suspect that the context matters, and would be more likely to decide to generalize in a
context specific way. On the other hand, if there are no systematic patterns to the way that observations are distributed across contexts, then the learner should deem the context to be irrelevant and
hence decide to generalize broadly across contexts. Indeed, this is exactly what happens with human
learners. For instance, consider the data from Experiment 1 in [4]. One condition of this experiment
was a standard knowledge partitioning experiment, identical in every meaningful respect to the data
described earlier in this paper. As is typical for such experiments, knowledge partitioning was observed for at least some of the participants. In the other condition, however, the context variable was
randomized: each of the training items was assigned to a randomly chosen context. In this condition,
no knowledge partitioning was observed.
What this implies is that human learners use the systematicity of the context as a cue to determine
how broadly to generalize. As such, the model should learn that γ is small when the context varies
systematically; and similarly should learn that γ is large if the context is random. To that end, this
section develops a hierarchical extension to the model that is able to do exactly this, and shows that
it is able to capture both conditions of the data in [4] without varying any parameter values.
4.1 A hierarchical context-sensitive RMC
Extending the statistical model is straightforward: we place priors over γ, and allow the model to
infer a joint posterior distribution over the cluster assignments z and the context specificity γ. This is
closely related to other hierarchical Bayesian models of category learning [15–19]. A simple choice
of prior for this situation is the exponential distribution,

\gamma \mid \lambda \sim \mathrm{Exponential}(\lambda)   (13)

Following the approach taken with α, λ was fixed so as to ensure that the model has no a priori bias
to prefer either of the two solutions. When γ = 3.7 the two solutions are equally likely (Figure 2);
a value of λ = .19 ensures that this value of γ is the prior median.
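For completeness, the value λ = .19 follows directly from the median of the exponential distribution: the median m of Exponential(λ) solves F(m) = 1/2, so

1 - e^{-\lambda m} = \tfrac{1}{2} \;\Rightarrow\; m = \frac{\ln 2}{\lambda} \;\Rightarrow\; \lambda = \frac{\ln 2}{3.7} \approx 0.19 .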
Figure 4: Learned distributions over γ in the systematic (dark rectangles) and randomized (light
rectangles) conditions, plotted on a logarithmic scale. The dashed line shows the location of the
prior median (i.e., γ = 3.7).
Inference in the hierarchical model proceeds as before, with a Metropolis step added to resample γ.
The acceptance probabilities for the Metropolis sampler may be calculated by observing that

P(\gamma \mid x, \ell, c, z) \propto P(x, \ell, c \mid z, \gamma)\, P(\gamma)   (14)
\propto P(c \mid z, \gamma)\, P(\gamma)   (15)
= \left[ \int P(c \mid z, \phi)\, P(\phi \mid \gamma)\, d\phi \right] P(\gamma)   (16)
= P(\gamma) \prod_{k=1}^{K} \int_0^1 P(c^{(k)} \mid \phi_k)\, P(\phi_k \mid \gamma)\, d\phi_k   (17)
= \lambda \exp(-\lambda \gamma) \prod_{k=1}^{K} \frac{n_k!}{n_k^{(c=1)}!\, n_k^{(c=2)}!} \cdot \frac{B(n_k^{(c=1)} + \gamma,\; n_k^{(c=2)} + \gamma)}{B(\gamma, \gamma)}   (18)
\propto \exp(-\lambda \gamma) \prod_{k=1}^{K} \frac{B(n_k^{(c=1)} + \gamma,\; n_k^{(c=2)} + \gamma)}{B(\gamma, \gamma)}   (19)

where B(a, b) = Γ(a)Γ(b)/Γ(a + b) denotes the beta function, and n_k^{(c=j)} counts the number of
items in cluster k that appeared in context j.
4.2 Application of the extended model
To explore the performance of the hierarchical extension of the context sensitive RMC, the model
was trained on both the original, systematic version of the knowledge partitioning experiments, and
on a version with the context variables randomly permuted. The posterior distributions over γ that
this produces are shown in Figure 4. As expected, in the systematic condition the model notices the
fact that the context varies systematically as a function of the feature values x, and learns to form
context specific clusters. Indeed, 97% of the posterior distribution over z is absorbed by the four
cluster solution (or other solutions that are sufficiently similar in the sense discussed earlier). In the
process, the model infers that γ is small and generalizes in a context specific way (as per Figure 3).
Nevertheless, without changing any parameter values, the same model in the randomized condition
infers that there is no pattern to the context variable, which ends up being randomly scattered across
the clusters. For this condition 57% of the posterior mass is approximately equivalent to the three
cluster solution. As a result, the model infers that γ is large, and generalizes in the context general
fashion. In short, the model captures human performance quite effectively.
When considering the implications of Figure 4, it is clear that the model captures the critical feature of the experiment: the ability to learn when to make context specific generalizations and when
not to. The distributions over γ are very different as a function of condition, indicating that the
model learns appropriately. What is less clear is the extent to which the model would be expected
to produce the correct pattern of individual differences. Inspection of Figure 4 reveals that in the
randomized context condition the posterior distribution over γ does not move all that far above the
prior median of 3.7 (dashed line) which by construction is intended to be a fairly neutral value,
whereas in the systematic condition nearly the entire distribution lies below this value. In other
words, the systematic condition produces more learning about γ. If one were to suppose that people
had no inherent prior biases to prefer to generalize one way or the other, it should follow that the
less informative condition (i.e., random context) should reveal more individual differences. Empirically, the reverse is true: in the less informative condition, all participants generalize in a context
general fashion; whereas in the more informative condition (i.e., systematic context) some but not
all participants learn to generalize more narrowly. This does not pose any inherent difficulty for the
model, but it does suggest that the "unbiased" prior chosen for this demonstration is not quite right:
people do appear to have strong prior biases to prefer context general representations. Fortunately, a
cursory investigation revealed that altering the prior over γ moves the posteriors in a sensible fashion
while still keeping the two distributions distinct.
5 Discussion
The hierarchical Bayesian model outlined in this paper explains how human conceptual learning
can be context general in some situations, and context sensitive in others. It captures the critical
"knowledge partitioning" effect [2–4, 6] and does so without altering the core components of the
RMC [7] and its extensions [15, 16, 18, 20]. This success leads to an interesting question: why does
ALCOVE [21] not account for knowledge partitioning (see [4])? Arguably, ALCOVE has been
the dominant theory for learned selective attention for almost 20 years, and its attentional learning
mechanisms bear a striking similarity to the hierarchical Bayesian learning idea used in this paper
and elsewhere [15?19], as well as to statistical methods for automatic relevance determination in
Bayesian neural networks [22]. On the basis of these similarities, one might expect similar behavior
from ALCOVE and the context sensitive RMC. Yet this is not the case. The answer to this lies in
the details of why one learns dimensional biases. In ALCOVE, as in many connectionist models, the
dimensional biases are chosen to optimize the ability to predict the category label. Since the context
variable is not correlated with the label in these experiments (by construction), ALCOVE learns to
ignore the context variable in all cases. The approach taken by the RMC is qualitatively different:
it looks for clusters of items where the label, the context and the feature values are all similar to
one another. Knowledge partitioning experiments more or less require that such clusters exist, so
the RMC can learn that the context variable is not distributed randomly. In short, ALCOVE treats
context as important only if it can predict the label; the RMC treats the context as important if it
helps the learner infer the structure of the world.
Looking beyond artificial learning tasks, learning the situations in which knowledge should be applied is an important task for an intelligent agent operating in a complex world. Moreover, hierarchical Bayesian models provide a natural formalism for describing how human learners are able to
do so. Viewed in this light, the fact that it is possible for people to hold contradictory knowledge
in different ?parcels? should be viewed as a special case of the general problem of learning the set
of relevant contexts. Consider, for instance, the example in which fire fighters make different judgments about the same fire depending on whether it is called a back-burn or a to-be-controlled fire
[2]. If fire fighters observe a very different distribution of fires in the context of back-burns than
in the context of to-be-controlled fires, then it should be no surprise that they acquire two distinct
theories of ?fires?, each bound to a different context. Although this particular example is a case in
which the learned context specificity is incorrect, it takes only a minor shift to make the behavior
correct. While the behavior of fires does not depend on the reason why they were lit, it does depend
on what combustibles they are fed. If the distinction were between fires observed in a forest context and fires observed in a tyre yard, context specific category representations suddenly seem very
sensible. Similarly, social categories such as "polite behavior" are necessarily highly context dependent, so it makes sense that the learner would construct different rules for different contexts. If the
world presents the learner with observations that vary systematically across contexts, partitioning
knowledge by context would seem to be a rational learning strategy.
Acknowledgements
This research was supported by an Australian Research Fellowship (ARC grant DP-0773794).
References
[1] W.G. Chase and H.A. Simon. Perception in chess. Cognitive Psychology, 4:55–81, 1973.
[2] S. Lewandowsky and K. Kirsner. Knowledge partitioning: Context-dependent use of expertise. Memory and Cognition, 28:295–305, 2000.
[3] L.-X. Yang and S. Lewandowsky. Context-gated knowledge partitioning in categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29:663–679, 2003.
[4] L.-X. Yang and S. Lewandowsky. Knowledge partitioning in categorization: Constraints on exemplar models. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30:1045–1064, 2004.
[5] M.L. Kalish, S. Lewandowsky, and J.K. Kruschke. Population of linear experts: Knowledge partitioning in function learning. Psychological Review, 111:1072–1099, 2004.
[6] S. Lewandowsky, L. Roberts, and L.-X. Yang. Knowledge partitioning in category learning: Boundary conditions. Memory and Cognition, 38:1676–1688, 2006.
[7] J.R. Anderson. The adaptive nature of human categorization. Psychological Review, 98:409–429, 1991.
[8] D. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII–1983, pages 1–198. Springer, Berlin, 1985.
[9] R.N. Shepard. Integrality versus separability of stimulus dimensions: From an early convergence of evidence to a proposed theoretical basis. In J.R. Pomerantz and G.L. Lockhead, editors, The Perception of Structure: Essays in Honor of Wendell R. Garner, pages 53–71. American Psychological Association, Washington, DC, 1991.
[10] A. Gelman, J.B. Carlin, H.S. Stern, and D.B. Rubin. Bayesian Data Analysis. Chapman and Hall, Boca Raton, 2nd edition, 2004.
[11] C.E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Annals of Statistics, 2:1152–1174, 1974.
[12] L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2:193–218, 1985.
[13] L. Valiant. A theory of the learnable. Communications of the ACM, 27:1134–1142, 1984.
[14] M.A. Erickson and J.K. Kruschke. Rules and exemplars in category learning. Journal of Experimental Psychology: General, 127:107–140, 1998.
[15] C. Kemp, A. Perfors, and J.B. Tenenbaum. Learning overhypotheses with hierarchical Bayesian models. Developmental Science, 10:307–332, 2007.
[16] A. Perfors and J.B. Tenenbaum. Learning to learn categories. In N. Taatgen, H. van Rijn, L. Schomaker, and J. Nerbonne, editors, Proceedings of the 31st Annual Conference of the Cognitive Science Society, pages 136–141, Austin, TX, 2009. Cognitive Science Society.
[17] D.J. Navarro. From natural kinds to complex categories. In R. Sun and N. Miyake, editors, Proceedings of the 28th Annual Conference of the Cognitive Science Society, pages 621–626, Mahwah, NJ, 2006. Lawrence Erlbaum.
[18] T.L. Griffiths, K.R. Canini, A.N. Sanborn, and D.J. Navarro. Unifying rational models of categorization via the hierarchical Dirichlet process. In D.S. McNamara and J.G. Trafton, editors, Proceedings of the 29th Annual Conference of the Cognitive Science Society, pages 323–328, Austin, TX, 2007. Cognitive Science Society.
[19] K. Heller, A.N. Sanborn, and N. Chater. Hierarchical learning of dimensional biases in human categorization. In J. Lafferty and C. Williams, editors, Advances in Neural Information Processing Systems 22, Cambridge, MA, 2009. MIT Press.
[20] A.N. Sanborn, T.L. Griffiths, and D.J. Navarro. Rational approximations to rational models: Alternative algorithms for category learning. Psychological Review, in press.
[21] J.K. Kruschke. ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review, 99:22–44, 1992.
[22] R. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, New York, 1996.
9
3,467 | 414 | Real-time autonomous robot navigation using
VLSI neural networks
Lionel Tarassenko    Michael Brownlow    Gillian Marshall*
Department of Engineering Science
Oxford University, Oxford, OX1 3PJ, UK
* Also: RSRE, Great Malvern, Worcester, WR14 3PS
Jon Tombs
Alan Murray
Department of Electrical Engineering
Edinburgh University, Edinburgh, EH9 3JL, UK
Abstract
We describe a real time robot navigation system based on three VLSI
neural network modules. These are a resistive grid for path planning, a
nearest-neighbour classifier for localization using range data from a time-of-flight infra-red sensor and a sensory-motor associative network for dynamic obstacle avoidance.
1 INTRODUCTION
There have been very few demonstrations of the application of VLSI neural networks
to real world problems. Yet there are many signal processing, pattern recognition
or optimization problems where a large number of competing hypotheses need to
be explored in parallel, most often in real time. The massive parallelism of VLSI
neural network devices, with one multiplier circuit per synapse, is ideally suited to
such problems. In this paper, we present preliminary results from our design for a
real time robot navigation system based on VLSI neural network modules. This is a
real world problem which has not been fully solved by traditional AI methods; even
when partial solutions have been proposed and implemented, these have required
vast computational resources, usually remote from the robot and linked to it via an
umbilical cord.
2 OVERVIEW
The aim of our work is to develop an autonomous vehicle capable of real-time
navigation, including obstacle avoidance, in a known indoor environment. The
obstacles may be permanent (static) or unexpected and dynamic (for example,
in an automated factory environment, the walls and machines are permanent but
people, other moving vehicles and packages are not.) There are three neural network
modules at the heart of our navigation system: a localization module (to determine,
at any time, the robot's position within the environment), an obstacle detection
module and a path planning module (to compute a path to the goal which avoids
obstacles). These modules perform low-level processing in real time which can then
be decoupled from higher level processing to be carried out by a simple controller.
It is our view that such a hybrid system is the best way to realise the computational
potential of artificial neural networks for solving a real world problem such as this
without compromising overall system performance.
A short description of each module is now given. In each case, the general principles
are first outlined and, where applicable, the results of our preliminary work are then
reported.
3 PATH PLANNING
The use of resistive grids for parallel analog computation was first suggested by Horn
in the mid-seventies (Horn, 1974) and the idea has since been exploited by Mead and
co-workers, for example in a silicon retina (Mead and Mahowald, 1988). Although
these resistive grids cannot be said to be neural networks in the conventional sense,
they also perform parallel analog computation and they have the same advantages,
in terms of speed and fault-tolerance, as any hardware realisation of neural networks.
We have taken the resistive grid concept and applied it to the path planning problem, here taken to be the computation of an obstacle-avoiding path, in a structured
environment, from the robot's initial (or present) position (P) to its goal (G). In our
approach, the robot's working domain is discretized and mapped onto a resistive
grid of hexagonal or rectangular cells - see Figure 1 which shows the test environment for Autonomous Guided Vehicles (AGV's) in the Oxford Robotics Laboratory.
Each resistor in the grid has a value of R0, unless it is part of a region of the grid
corresponding to an obstacle, in which case its resistance is infinite (R∞).
The principle of the method is perhaps best understood by considering a continuous
analog of the resistive grid (for example, a sheet of material of uniform resistivity in
which holes have been cut to represent the obstacles). The current streamlines resulting from the application of an external source between P and G skirt around the
obstacles; if we follow one of these streamlines from P to G, we will obtain a guaranteed collision-free path since current cannot flow into the obstacles (Tarassenko and
Blake, 1991). For simple cases such as circularly symmetric conductivity distributions in 2D, Laplace's equation can be solved in order to calculate the value of the
potential V at every point within the workspace. Following a current streamline is
then simply a matter of performing gradient descent in V.
Figure 1: The Oxford test environment for AGV's mapped out as a hexagonal
resistive grid. The resistors corresponding to the four pillars in the middle are open
circuits. Note that the pillars are enlarged in their grid representation in order to
take into account the mobile robot's finite size.
It is not possible, however, to solve Laplace's equation analytically for realistic environments. With the resistive grid, the problem is discretized and mapped onto a
hardware representation which can be implemented in VLSI. As soon as an external
source of power is connected between P and G, the resistive network settles into
the state of least power dissipation and the node voltages can be read out (hardware computation of Kirchhoff's equations). The path from P to G is computed
incrementally from local voltage measurements: for each node, the next move is
identified by measuring the voltage drop ΔV_n between that node and each of its
nearest neighbours (n = 6 for a hexagonal grid) and then selecting the node corresponding to (ΔV_n)max. This is illustrated by the example of a robot in a maze
(Figure 2). As above, the resistors shown shaded are open circuits whilst all other
resistors are set to be equal to R0. The robot is initially placed at the centre of the
maze (P) and a path has to be found to the goal in the top left-hand corner (G). The
solid line shows the path resulting from a single application of the voltage between
P and G. The dotted line shows the (optimal) path computed by re-applying the
voltage at every node as the robot moves towards the goal. As already indicated,
this is actually how we intend to use the resistive grid planner in practice, since
this approach also allows us to re-compute the robot's path whenever unexpected
obstacles appear in the environment (see Section 5).
Figure 2: Path from middle of maze (P) to top left-hand corner (G)
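To make the planner concrete, the following minimal Python sketch (our illustration, not the authors' analog hardware; the grid size, R0 = 1 and the obstacle layout are assumptions) solves Kirchhoff's equations for the node voltages on a small 4-connected grid and then follows the steepest voltage drop from P to G:

import numpy as np

N = 10                                     # 10x10 grid, an assumed size
idx = lambda r, c: r * N + c
obstacle = {(4, c) for c in range(2, 8)}   # an assumed wall with a gap around it

# Conductance 1/R0 = 1 between 4-connected free cells; no edge into obstacles.
G = np.zeros((N * N, N * N))
for r in range(N):
    for c in range(N):
        for dr, dc in ((0, 1), (1, 0)):
            r2, c2 = r + dr, c + dc
            if r2 < N and c2 < N and (r, c) not in obstacle and (r2, c2) not in obstacle:
                G[idx(r, c), idx(r2, c2)] = G[idx(r2, c2), idx(r, c)] = 1.0

L = np.diag(G.sum(1)) - G                  # graph Laplacian: Kirchhoff's current law
P, goal = idx(9, 0), idx(0, 9)
obs = {idx(r, c) for (r, c) in obstacle}
free = [i for i in range(N * N) if i not in obs and i not in (P, goal)]

# Fix V = 1 at P and V = 0 at G, then solve for the interior node voltages.
V = np.zeros(N * N)
V[P] = 1.0
V[free] = np.linalg.solve(L[np.ix_(free, free)],
                          -L[np.ix_(free, [P, goal])] @ np.array([1.0, 0.0]))

# Incremental path: always move to the neighbour with the largest voltage drop.
path = [P]
while path[-1] != goal:
    cur = path[-1]
    nbrs = np.nonzero(G[cur])[0]
    path.append(int(max(nbrs, key=lambda j: V[cur] - V[j])))

Because the interior voltages are harmonic, no free node is a local minimum, so this greedy descent cannot get trapped before reaching G.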
3.1 VLSI IMPLEMENTATION
The VLSI implementation of the resistive grid method will allow us to solve the path
planning for complex environments in real time. MOS switches are ideal implementations of the binary resistors in the grid. Each transistor can be programmed to
be either open (R∞) or closed (R0) from a RAM cell connected to its gate. With
the incremental computation of the path described above, the selection of the next
move is a matter of identifying the largest of six voltages. Of course, the nearest
neighbour voltages and that of the P node could be read out through an A/D converter and the decision made off-chip. We favour a full hardware solution instead,
whereby the maximum voltage difference is directly identified on-chip.
4 LOCALIZATION
The autonomous robot should at any time be able to work out its position in
the workspace so that the path to the goal can be updated if required. The grid
representation of the environment used for the path planner can also be employed
for localization purposes, in which case localization becomes, in the first instance,
a matter of identifying the nearest node in the grid at any time during navigation.
This task can be performed by harnessing the pattern recognition capabilities of
neural networks. The room environment is learnt by recording a 360° range scan
at every node during a training phase prior to navigation. During navigation, the
nearest node is identified using a minimum-distance classifier implemented on a
single-layer neural network working on dense input data (one range value every 3°,
say). In order to solve the localization problem in real-time, we have designed a time-of-flight optical rangefinder, which uses near infra-red light, amplitude-modulated
at a frequency of just above 5 MHz, together with a heterodyne mixing technique.
Our design is capable of resolving phase shifts in the received light signal of the
order of 0.1° over a 50 dB dynamic range.
The rotating optical scanner gives a complete 360° scan approximately every second
during navigation. The minimum-distance classifier is used to compare this scan x
with the k patterns u_i recorded at each node during training. If we use a Euclidean
metric for the comparison, this is equivalent to identifying the pattern u_i for which:

||x - u_i||² = x^T x - 2u_i^T x + u_i^T u_i   (1)

is a minimum. The first term in the above equation is the same for all i and can be
ignored. We can therefore write:

g_i(x) = -½(-2u_i^T x + u_i^T u_i) = w_i^T x + w_i0   (2)

where g_i(x) is a linear discriminant function, w_i = u_i and w_i0 = -½ u_i^T u_i. Thus each
w_i vector is one of the learnt patterns u_i and the discriminant g_i(x) matches the
input x with u_i, point by point. If we let w_i = {I_ij} and x = {V_j} and assume
that there are n range values in each scan, then we can write:

g_i(x) = Σ_{j=1}^{n} I_ij V_j + w_i0   (3)
Thus the synaptic weights are an exact copy of the patterns recorded at each grid
point during learning and the neurons can be thought of as processors which compute distances to those patterns. During navigation, the nearest node is identified
with a network of k neurons evaluating k discriminant functions in parallel, followed
by a "winner-take-all" network to pick the maximum g_i(x). This is the well-known
implementation of the nearest-neighbour classifier on a neural network architecture.
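A small numerical sketch of this classifier (with synthetic scans standing in for real training data; the sizes are assumptions, not the paper's values):

import numpy as np

rng = np.random.default_rng(0)
k, n = 24, 120                        # assumed: 24 grid nodes, 120 range readings per scan
U = rng.uniform(0.5, 5.0, (k, n))     # training scans u_i recorded at each node

W = U.copy()                          # synaptic weights are copies of the learnt scans
w0 = -0.5 * np.sum(U * U, axis=1)     # biases w_i0 = -1/2 u_i . u_i

x = U[7] + rng.normal(0.0, 0.05, n)   # a noisy scan taken near node 7
g = W @ x + w0                        # k discriminants evaluated "in parallel"
print(int(np.argmax(g)))              # winner-take-all: prints 7, the nearest node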
Since the u_i's are analog input vectors, the synaptic weights I_ij will also be
analog quantities and this leads to a very efficient use of the pulse-stream analog
VLSI technology which we have recently developed for the implementation of neural
networks (Murray et al., 1990).
With pulse-stream arithmetic, analog computation is performed under digital control. The neural states are represented by pulse rates and synaptic multiplication is achieved by pulse width modulation. This allows very compact, fully-
programmable, synapse circuits to be designed (3 or 4 transistors per synapse).
We have already applied one set of our working chips to the nearest-neighbour classification task described in this Section. They were evaluated on a 24-node test
environment and full results have been reported elsewhere (Brownlow, Tarassenko
and Murray, 1990). It was found that the Σ I_ij V_j scalar products evaluated by our
VLSI chips on this test problem were always within 1.2% of those computed on a
SUN 3/80 workstation.
5 OBSTACLE DETECTION/AVOIDANCE
A more appropriate name for this module may be that of local navigation. The
module will rely on optical flow information derived from a number of fixed optical
sensors mounted on the robot platform. Each sensor will include a pulsed light
source to illuminate the scene locally and the light reflected from nearby objects
will be focussed onto a pair of gratings at right angles to each other, before being
detected by a photodiode array. From the time derivatives of the received signals,
it is possible to compute the relative velocities of nearby objects such as moving
obstacles. We plan to use previous work on structure from motion to pre-process
these velocity vectors and derive from them appropriate feature vectors to be used
as inputs to a low-level neural network for motor control (see Figure 3 below).
[Figure 3 shows a block diagram: fixed sensors along the robot's intended path feed a measurement of optic flow; this yields a velocity signal of approaching objects, which drives a low-level network for direct motor control.]
Figure 3: Sensory-motor associative network for obstacle avoidance
The obstacle avoidance network will be taught to associate appropriate motor behaviours with different types of sensory input data, for example the taking of the
correct evasive action when a moving object is approaching the robot from a particular direction. This module will therefore be responsible for path adjustment in
response to dynamic obstacles (with a bandwidth of around 100 Hz), but the path
planner of Section 3 will continue to deal with path reconfiguration at a much lower
data rate (1 Hz), once the dynamic obstacle has been avoided. Our work on this
module has, so far, been mainly concerned with the design of the input sensors and
associated electronics.
6 CONCLUSION
We have implemented the path planning and localization modules described in this
paper on a SUN 4 workstation and used them to control a mobile robot platform
via a radio link. This capability was demonstrated at the NIPS'90 Conference with
a videotape recording of our mobile robot navigating around static obstacles in
a laboratory environment, using real-time infra-red data for localization. It was
possible to run the path planner in near real-time in simulation because no resistor
value need be changed in a static environment; in order to achieve real-time path
planning in a dynamic environment, however, the hardware solution of Section 3
will be mandatory. Our aim remains the implementation of all 3 modules in VLSI
in order to demonstrate a fully autonomous real-time navigation system with all
the sensors and hardware mounted on the robot platform.
Acknowledgements
We gratefully acknowledge the financial support of UK Science and Engineering
Research Council and of the EEC (ESPRIT BRA). We have benefitted greatly
from the help and advice of members of the Robotics Research Group, most notably
Martin Adams, Gabriel Hamid and Jake Reynolds.
References
M.J. Brownlow, L. Tarassenko & A.F. Murray. (1990) Analogue computation using
VLSI neural network devices. Electronics Letters, 26(16):1297-1299.
B.K.P. Horn. (1974) Determining lightness from an image. Computational Graphics
& Image Processing, 3:277-299.
C.A. Mead & M.A. Mahowald. (1988) A silicon model of early visual processing.
Neural Networks, 1(1 ):91-97.
A.F. Murray, M.J. Brownlow, L. Tarassenko, A. Hamilton, I.S. Han & H.M. Reekie.
(1990) Pulse-Firing Neural Chips for Hundreds of Neurons. In D.S. Touretzky (ed.),
Advances in Neural Information Processing Systems 2, 785-792. San Mateo, CA:
Morgan Kaufmann.
L. Tarassenko & A. Blake. (1991). Analogue computation of collision-free paths. To
be published in: Proceedings of 1991 IEEE Int. Conf. on Robotics & Automation,
Sacramento, CA.
3,468 | 4,140 | (RF)2 – Random Forest Random Field
Nadia Payet and Sinisa Todorovic
School of Electrical Engineering and Computer Science
Oregon State University
[email protected], [email protected]
Abstract
We combine random forest (RF) and conditional random field (CRF) into a new
computational framework, called random forest random field (RF)2 . Inference
of (RF)2 uses the Swendsen-Wang cut algorithm, characterized by Metropolis-Hastings jumps. A jump from one state to another depends on the ratio of the
proposal distributions, and on the ratio of the posterior distributions of the two
states. Prior work typically resorts to a parametric estimation of these four distributions, and then computes their ratio. Our key idea is to instead directly estimate these ratios using RF. RF collects in leaf nodes of each decision tree the
class histograms of training examples. We use these class histograms for a nonparametric estimation of the distribution ratios. We derive the theoretical error
bounds of a two-class (RF)2 . (RF)2 is applied to a challenging task of multiclass
object recognition and segmentation over a random field of input image regions.
In our empirical evaluation, we use only the visual information provided by image
regions (e.g., color, texture, spatial layout), whereas the competing methods additionally use higher-level cues about the horizon location and 3D layout of surfaces
in the scene. Nevertheless, (RF)2 outperforms the state of the art on benchmark
datasets, in terms of accuracy and computation time.
1 Introduction
This paper presents a new computational framework, called random forest random field (RF)2 ,
which provides a principled way to jointly reason about multiple, statistically dependent random
variables and their attributes. We derive theoretical performance bounds of (RF)2 , and demonstrate
its utility on a challenging task of conjoint object recognition and segmentation.
Identifying subimage ownership among occurrences of distinct object classes in an image is a fundamental, and one of the most actively pursued, problems in computer vision, machine learning, and
artificial intelligence [1-11]. The goal is to assign the label of one of multiple semantic classes to
each image pixel. Our approach builds on the following common recognition strategies: (i) labels
of neighboring image parts are likely to be correlated, one of the main principles of perceptual
organization; and (ii) recognized objects dictate which other objects to expect in the scene, and
their scale and spatial configuration, one of the main principles of context-driven recognition that
"binds" all object detections in a coherent scene interpretation. We formalize perceptual grouping
and context by a graphical model aimed at capturing statistical dependencies among random variables (i.e., labels or attributes) associated with different pixel neighborhoods. Thus, we derive a
unified framework for combined object recognition and segmentation, as a graph-structured prediction of all random variables in a single, consistent model of the scene.
The graphical model we use is Conditional Random Field (CRF) [12], one of the most popular
models for structured inference over pixels [2, 3], patches [4, 5], or image regions [6-8], for object
recognition and segmentation. CRF defines a posterior distribution of hidden random variables Y
(labels), given observed image features X, in a factored form:

p(Y|X; θ) = (1/Z(θ)) ∏_c ψ_c(Y_c, X; θ).

Each potential ψ_c is a function over a subset Y_c ⊆ Y, conditioned on X, and parameterized by θ.
The potentials are often defined as linear functions of parameters, ψ_c(Y_c, X; θ) = θ^T φ_c, where φ_c
is the output of some detectors over observables X [2-4]. This means that p(Y|X; θ) is modeled
as a log-linear function, which is not adequate when the detector outputs do not provide a linear
separability of the classes. Learning θ is hard, because computation of the partition function Z(θ)
is intractable for most graphs (except for chains and trees). Inference is typically posed as the joint
MAP assignment that minimizes the energy Σ_c ψ_c(Y_c, X; θ), which is also intractable for general
graphs. The intractability of CRF learning and inference often motivates prior work to resort to
approximate algorithms, e.g., graph-cuts, and loopy belief propagation (LBP). The effect of these
approximations on the original semantics of CRF is poorly understood. For example, an approximate
inference stuck in a local maximum may not represent the intended consistent scene interpretation.
Motivation: Some of the aforementioned shortcomings can be addressed when CRF inference is
conducted using the Metropolis-Hastings (MH) algorithm. MH draws samples Y^(t) from the CRF's
posterior, p(Y|X), and thus generates a Markov chain in which state Y^(t+1) depends only on the
previous state Y^(t). The jumps between the states are reversible, and governed by a proposal density
q(Y^(t) → Y^(t+1)). The proposal is accepted if the acceptance rate, α, drawn from U(0, 1), satisfies

α < min{ 1, [q(Y^(t+1) → Y^(t)) p(Y^(t+1)|X)] / [q(Y^(t) → Y^(t+1)) p(Y^(t)|X)] }.

MH provides strong theoretical guarantees of convergence
to the globally optimal state. As can be seen, the entire inference process is regulated by ratios of
the proposal and posterior distributions. Consequently, the bottleneck of every CRF learning and
inference, namely computing the partition function Z, is eliminated in MH.
Our key idea is to directly estimate the ratios of the proposal and posterior distributions, instead
of computing each individual distribution for conducting MH jumps. Previous work on MH for
CRFs usually commits to linear forms of the potential functions, and spends computational resources on estimating the four distributions: q(Y^(t+1) → Y^(t)), q(Y^(t) → Y^(t+1)), p(Y^(t+1)|X)
and p(Y^(t)|X). In contrast, our goal is to directly estimate the two ratios,
q(Y^(t+1) → Y^(t)) / q(Y^(t) → Y^(t+1)) and p(Y^(t+1)|X) / p(Y^(t)|X),
in a non-parametric manner, since the acceptance rate of MH jumps depends only on
these ratios. To this end, we use the random forests (RF) [13]. Given a training set of labeled examples, RF grows many decision trees. We view the trees as a way of discriminatively structuring
evidence about the class distributions in the training set. In particular, each leaf of each tree in RF
stores a histogram of the number of training examples from each class that reached that leaf. When
a new example is encountered, it is "dropped" down each of the trees in the forest, until it reaches a
leaf in every tree. The class histograms stored in all these leaves can then be used as a robust estimate of the ratio of that example's posterior distributions. This is related to recent work on Hough
forests for object detection and localization [14], where leaves collect information on locations and
sizes of bounding boxes of objects in training images. However, they use this evidence to predict
a spatial distribution of bounding boxes in a test image, whereas we use the evidence stored in tree
leaves to predict the distribution ratios. Evidence trees are also used in [15], but only as a first stage
of a stacked-classifier architecture which replaces the standard majority voting of RF.
RF is difficult to analyze [13, 16]. Regarding consistency of RF, it is known that their rate of convergence to the optimal Bayes' rule depends only on the number of informative variables. It is also
shown that RF that cuts down to pure leaves uses a weighted, layered, nearest neighbor rule [16].
We are not aware of any theoretical analysis of RF as an estimator of ratios of posterior distributions.
Contributions: We combine RF and CRF into a new, principled and elegant computational framework (RF)2 . Learning is efficiently conducted by RF which collects the class histograms of training
examples in leaf nodes of each decision tree. This evidence is then used for the non-parametric
estimation of the ratios of the proposal and posterior distributions, required by MH-based inference
of (RF)2 . We derive the theoretical error bounds of estimating distribution ratios by a two-class RF,
which is then used to derive the theoretical performance bounds of a two-class (RF)2 .
Paper Organization: Sections 2-4 specify the CRF model, its MH-based inference, and RF-based
learning. Sections 5-6 present our experimental evaluation, and theoretical analysis of (RF)2.
2 CRF Model
We formulate multiclass object recognition and segmentation as the MAP inference of a CRF, defined over a set of multiscale image regions. Regions are used as image features, because they are
dimensionally matched with 2D object occurrences in the image, and thus facilitate modeling of various perceptual-organization and contextual cues (e.g., continuation, smoothness, containment, and
adjacency) that are often used in recognition [6-11]. Access to regions is provided by the state-of-the-art, multiscale segmentation algorithm of [17], which detects and closes object (and object-part)
boundaries using the domain knowledge. Since the right scale at which objects occur is unknown,
we use all regions from all scales.
The extracted regions are organized in a graph, G = (V, E), where V and E are the sets of nodes and
edges. The nodes i = 1, ..., N correspond to multiscale segments, and edges (i, j) ∈ E capture
their spatial relations. Each node i is characterized by a descriptor vector, xi , whose elements
describe photometric and geometric properties of the corresponding region (e.g., color, shape, filter
responses). A pair of regions can have one of the following relationships: (1) ascendent/descendent,
(2) touching, and (3) far. Since the segmentation algorithm of [17] is strictly hierarchical, region
i is descendent of region j, if i is fully embedded as subregion within ancestor j. Two regions i
and j touch if they share a boundary part. Finally, if i and j are not in the hierarchical and touch
relationships then they are declared as far. Edges connect all node pairs, E = V × V, |E| = N^2.
Each edge (i, j) is associated with a tag, eij , indicating the relationship type between i and j.
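A sketch, under assumed region representations (pixel sets drawn from the strict hierarchy), of how the tag e_ij could be computed; the function and argument names are ours, not the authors' implementation:

def relationship(region_i, region_j):
    # region_i, region_j: sets of (row, col) pixels of two segmented regions
    if region_i <= region_j or region_j <= region_i:
        return 1                      # ascendent/descendent: one embeds the other
    dilated = {(r + dr, c + dc) for (r, c) in region_i
               for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))}
    if dilated & region_j:
        return 2                      # touching: they share a boundary part
    return 3                          # far: neither nested nor adjacent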
CRF is defined as the graphical model over G. Let Y = {yi } denote all random variables associated
with the nodes, indicating the class label of the corresponding region, y_i ∈ {0, 1, ..., K}, where
K denotes the total number of object classes, and label 0 is reserved for the background class. Let
p_i = p(y_i|x_i) and p_ij = p(y_i, y_j|x_i, x_j, e_ij) denote the posterior distributions over nodes and pairs
of nodes. Then, we define the CRF as

p(Y|G) = ∏_{i∈V} p(y_i|x_i) ∏_{(i,j)∈E} p(y_i, y_j|x_i, x_j, e_ij) = ∏_{i∈V} p_i ∏_{(i,j)∈E} p_ij.   (1)

Multi-coloring of the CRF is defined as the joint MAP assignment Y* = arg max_Y p(Y|G). In the
following section, we explain how to conduct this inference.
3 CRF Inference
For CRF inference, we use the Swendsen-Wang cut algorithm (SW-cut), presented in [18]. SW-cut iterates the Metropolis-Hastings (MH) reversible jumps through the following two steps. (1)
Graph clustering: SW-cut probabilistically samples connected components, CCs, where each
CC represents a subset of nodes with the same color. This is done by probabilistically cutting
edges between all graph nodes that have the same color based on their posterior distributions
p_ij = p(y_i, y_j|x_i, x_j, e_ij). (2) Graph relabeling: SW-cut randomly selects one of the CCs obtained in step (1), randomly flips the color of all nodes in that CC, and cuts their edges with the
rest of the graph nodes having that same color. In each iteration, SW-cut probabilistically decides
whether to accept the new coloring of the selected CC, or to keep the previous state. Unlike other
MCMC methods that consider one node at a time (e.g., Gibbs sampler), SW-cut operates on a number of nodes at once. Consequently, SW-cut converges faster and enables inference on relatively
large graphs. Below, we review steps (1) and (2) of SW-cut, for completeness.
In step (1), edges of G are probabilistically sampled. This re-connects all nodes into new connected
components CC. If two nodes i and j have different labels, they cannot be in the same CC, so
their edge remains intact. If i and j have the same label, their edge is probabilistically sampled
according to posterior distribution p_ij. If in the latter case edge (i, j) is not sampled, we say that
it has been probabilistically "cut". Step (1) results in a state A. In step (2), we choose at random
a connected component CC from step (1), and randomly reassign a new color to all nodes in that
CC. To separate the re-colored CC from the rest of the graph, we cut existing edges that connect
CC to the rest of the graph nodes with that same color. Step (2) results in a new state B. SW-cut
accepts state B if the acceptance rate is sufficiently large via a random thresholding. Let q(A→B)
be the proposal probability for moving from state A to B, and let q(B→A) denote the converse.
The acceptance rate, α(A→B), of the move from A to B is defined as

α(A→B) = min{ 1, [q(B→A) p(Y=B|G)] / [q(A→B) p(Y=A|G)] }.   (2)
The computation complexity of each move is relatively low. The ratio q(B→A)/q(A→B) in (2) involves only
those edges that are "cut" around CC in states A and B, not all edges. Also, the ratio p(Y=B|G)/p(Y=A|G)
accounts only for the recolored nodes in CC, not the entire graph G, since all other probabilities
have not changed from state A to state B. Thus, from Eq. (1), the ratios of the proposal and posterior
distributions characterizing states A and B can be specified as

q(B→A)/q(A→B) = [∏_{(i,j)∈Cut_B} (1 - p_ij)] / [∏_{(i,j)∈Cut_A} (1 - p_ij)],  and
p(Y=B|G)/p(Y=A|G) = ∏_{i∈CC} [ (p_i^B / p_i^A) ∏_{j∈N(i)} (p_ij^B / p_ij^A) ],   (3)

where Cut_A and Cut_B denote the sets of "cut" edges in states A and B, and N(i) is the set of
neighbors of node i, N(i) = {j : j ∈ V, (i, j) ∈ E}.
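A sketch of one accept/reject test of the SW-cut using Eqs. (2)-(3); the dictionaries p_unary and p_pair are assumed to hold the estimated posteriors p_i and p_ij (their computation is given in Sec. 4), and all names are illustrative:

import math, random

def accept_move(cc, lab_A, lab_B, neighbors, cut_A, cut_B, p_unary, p_pair):
    # log of q(B->A)/q(A->B): products of (1 - p_ij) over the "cut" edges
    log_q = sum(math.log(1.0 - p_pair[(i, j)][lab_B[i], lab_B[j]]) for (i, j) in cut_B) \
          - sum(math.log(1.0 - p_pair[(i, j)][lab_A[i], lab_A[j]]) for (i, j) in cut_A)
    # log of p(Y=B|G)/p(Y=A|G): only the recolored nodes in CC contribute
    log_p = 0.0
    for i in cc:
        log_p += math.log(p_unary[i][lab_B[i]]) - math.log(p_unary[i][lab_A[i]])
        for j in neighbors[i]:
            log_p += math.log(p_pair[(i, j)][lab_B[i], lab_B[j]]) \
                   - math.log(p_pair[(i, j)][lab_A[i], lab_A[j]])
    return random.random() < min(1.0, math.exp(log_q + log_p))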
As shown in [18], SW-cut is relatively insensitive to different initializations. In our experiments, we
initialize all nodes in the CRF with label 0. Next, we show how to compute the ratios in Eq. (3).
4 Learning
RF can be used for estimating the ratios of the proposal and posterior distributions, given by Eq. (3),
since RF provides near Bayesian optimal decisions, as theoretically shown by Breiman [13]. In the
following, we describe how to build RF, and use it for computing the ratios in Eq. (3).
Our training data represent a set of M labeled regions. If region i falls within the bounding box of
an object in class y ∈ {1, 2, ..., K}, it receives label y. If i covers a number of bounding boxes
of different classes then i is added to the training set multiple times to account for all distinct class
labels it covers. Each region i is characterized by a d-dimensional descriptor vector, x_i ∈ R^d,
which encodes the photometric and geometric properties of i. The training dataset {(xi , yi ) : i =
1, . . . , M } is used to learn an ensemble of T decision trees representing RF.
In particular, each training sample is passed through every decision tree from the ensemble until it
reaches a leaf node. Each leaf l records a class histogram, φ_l = {φ_l(y) : y = 1, ..., K}, where
φ_l(y) counts the number of training examples belonging to class y that reached l. The total number
of training examples in l is then ||φ_l||. Also, for each pair of leaves (l, l′), we record a two-class
histogram, φ_ll′ = {φ_ll′(y, y′, e) : y, y′ = 1, ..., K; e = 1, 2, 3}, where φ_ll′(y, y′, e) counts the
number of pairs of training examples belonging to classes y and y′ that reached leaves l and l′, and
also have the relationship type e, namely the ascendent/descendent, touching, or far relationship.
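The training pass can be sketched as follows (leaf_of and the container layout are assumed helpers, not the authors' code):

from collections import defaultdict

def collect_leaf_statistics(trees, X, y, pairs, leaf_of):
    # X: region descriptors; y: labels; pairs: (i, j, e) with relationship tag e
    phi  = [defaultdict(lambda: defaultdict(int)) for _ in trees]  # phi_l(y)
    phi2 = [defaultdict(lambda: defaultdict(int)) for _ in trees]  # phi_ll'(y, y', e)
    for t, tree in enumerate(trees):
        leaves = [leaf_of(tree, x) for x in X]
        for i, label in enumerate(y):
            phi[t][leaves[i]][label] += 1
        for (i, j, e) in pairs:
            phi2[t][(leaves[i], leaves[j])][(y[i], y[j], e)] += 1
    return phi, phi2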
Given φ_l and φ_ll′, we are in a position to estimate the ratios of the proposal and posterior distributions,
defined in (3), which control the Metropolis-Hastings jumps in the SW-cut. Suppose two regions,
represented by their descriptors x_i and x_j, are labeled as y_i^A and y_j^A in state A, and y_i^B and y_j^B in
state B of one iteration of the SW-cut. Also, after passing x_i and x_j through T decision trees of the
learned RF, suppose they reached leaves l_i^t and l_j^t in each tree t = 1, ..., T. Then, we compute

p_i^B / p_i^A = [Σ_{t=1}^T φ_{l_i^t}(y_i^B)] / [Σ_{t=1}^T φ_{l_i^t}(y_i^A)],  and
p_ij^B / p_ij^A = [Σ_{t=1}^T φ_{l_i^t l_j^t}(y_i^B, y_j^B, e_ij)] / [Σ_{t=1}^T φ_{l_i^t l_j^t}(y_i^A, y_j^A, e_ij)],   (4)

for estimating p(Y=B|G)/p(Y=A|G).
To estimate the ratio of the proposal distributions, q(B→A)/q(A→B), it is necessary to compute each
individual probability p_ij, since the numerator and denominator of q(B→A)/q(A→B) do not contain the same set of
"cut" edges, Cut_A ≠ Cut_B, as specified in (3). Thus, we compute

p_ij = [Σ_{t=1}^T φ_{l_i^t l_j^t}(y_i, y_j, e_ij)] / [Σ_{t=1}^T ||φ_{l_i^t}|| ||φ_{l_j^t}||],   (5)

for estimating q(B→A)/q(A→B).
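At test time, the stored leaf statistics give the required quantities directly; a sketch reusing the containers above (helper names are assumptions):

def posterior_ratio(trees, phi, x_i, y_B, y_A, leaf_of):
    # Eq. (4), unary part: p_i^B / p_i^A from the leaf class histograms
    num = sum(phi[t][leaf_of(tree, x_i)][y_B] for t, tree in enumerate(trees))
    den = sum(phi[t][leaf_of(tree, x_i)][y_A] for t, tree in enumerate(trees))
    return num / max(den, 1)

def pairwise_posterior(trees, phi, phi2, x_i, x_j, y_i, y_j, e, leaf_of):
    # Eq. (5): p_ij, normalized by the total leaf occupancies ||phi_l||
    num = den = 0
    for t, tree in enumerate(trees):
        li, lj = leaf_of(tree, x_i), leaf_of(tree, x_j)
        num += phi2[t][(li, lj)][(y_i, y_j, e)]
        den += sum(phi[t][li].values()) * sum(phi[t][lj].values())
    return num / max(den, 1)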
In the following, we first present our empirical evaluation of (RF)2 , and then derive the theoretical
performance bounds of a simple, two-class (RF)2 .
5 Results
(RF)2 is evaluated on the task of object recognition and segmentation on two benchmark datasets.
First, the MSRC dataset consists of 591 images showing objects from 21 categories [3]. We use the
standard split of MSRC into training and test images [3]. Second, the Street-Scene dataset consists
of 3547 images of urban environments, and has manually annotated regions [6, 19]. As in [6], one
fifth of the Street-Scene images are used for testing, and the rest, for training. Both datasets provide
labels of bounding boxes around object occurrences as ground truth.
Images are segmented using the multiscale segmentation algorithm of [17], which uses the perceptual significance of a region boundary, Pb ∈ [0, 100], as an input parameter. We vary Pb =
30:10:150, and thus obtain a hierarchy of regions for each image. A region is characterized by a
descriptor vector consisting of the following properties: (i) 30-bin color histogram in the CIELAB
space; (ii) 250-dimensional histogram of filter responses of the MR8 filter bank, and the Laplacian of
Gaussian filters computed at each pixel, and mapped to 250 codewords whose dictionary is obtained
by K-means over all training images; (iii) 128-dimensional region boundary descriptor measuring
oriented contour energy along 8 orientations of each cell of a 4 × 4 grid overlaid over the region's
bounding box; (iv) coordinates of the region's centroid normalized to the image size. Regions extracted from training images are used for learning RF. A region that falls within a bounding box is
assigned the label of that box. If a region covers a number of bounding boxes of different classes,
it is added to the training set multiple times to account for each distinct label. We use the standard
random splits of training data to train 100 decision trees of RF, constructed in the top-down way.
The growth of each tree is constrained so its depth is less than 30, and its every leaf node contains at
least 20 training examples. To recognize and segment objects in a new test image, we first extract a
hierarchy of regions from the image by the segmentation algorithm of [17]. Then, we build the fully
connected CRF graph from the extracted regions (Sec. 2), and run the SW-cut inference (Sec. 4).
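The descriptor assembly amounts to concatenating the four components; a sketch with the feature extractors themselves left as assumed inputs:

import numpy as np

def region_descriptor(color_hist, texton_hist, boundary_desc, centroid, image_shape):
    # color_hist: 30-bin CIELAB histogram; texton_hist: 250 codeword counts;
    # boundary_desc: 128-dim contour-energy grid; centroid: (row, col) in pixels
    h, w = image_shape
    r, c = centroid
    return np.concatenate([color_hist, texton_hist, boundary_desc,
                           [r / h, c / w]])       # 30 + 250 + 128 + 2 = 410 dims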
We examine the following three variants of (RF)2: (RF)2-1: the spatial relationships of regions,
e_ij, are not accounted for when computing p_ij in Eq. (4) and Eq. (5); (RF)2-2: the region relationships touching and far are considered, while the ascendent/descendent relationship is not captured; and (RF)2-3: all three types of region layout and structural relationships are modeled. In
this paper, we consider (RF)2 -3 as our default variant, and explicitly state when the other two are
used instead. Note that considering region layouts and structure changes only the class histograms
recorded by leaf nodes of the learned decision trees, but it does not increase complexity.
For quantitative evaluation, we compute the pixel-wise classification accuracy averaged across all
test images, and object classes. This metric is suitable, because it does not favor object classes that
occur in images more frequently. Tab. 1 and Tab. 2 show our pixel-wise classification accuracy
on MSRC and Street-Scene images. Table 2 also compares the three variants of (RF)2 on MSRC
and Street-Scene images. The additional consideration of the region relationships touching and far
increases performance relative to that of (RF)2 -1, as expected. Our performance is the best when all
three types of region relationships are modeled. The tables also present the pixel-wise classification
accuracy of the state of the art CRF models [3,6,20,21]. Note that the methods of [6,21] additionally
use higher-level cues about the horizon location and 3D scene layout in their object recognition and
segmentation. As can be seen, (RF)2 outperforms the latest CRF models on both datasets.
Our segmentation results on example MSRC and Street-Scene images are shown in Fig. 1. Labels
of the finest-scale regions are depicted using distinct colors, since pixels get labels of the finest-scale
regions. As can be seen, (RF)2 correctly identifies groups of regions that belong to the same class.
Class       [10]  [22]  [23]  [20]  [3]   Ours
Aeroplane    88    82    83   100    60   100
Bicycle      91    72    79    98    75    99
Bird         34    24    30    11    19    42
Boat         49    18    27    63     7    69
Body         54    66    67    55    62    68
Book         93    93    80    78    92    95
Building     30    49    69    73    62    74
Car          82    74    70    88    63    88
Cat          56    75    68    11    54    77
Chair        74    51    45    80    15    80
Cow          68    97    78    74    58    99
Dog          54    35    52    43    19    61
Face         77    87    84    72    74    91
Flower       90    74    47    72    63    93
Grass        71    88    96    96    97    99
Road         31    78    78    76    86    78
Sheep        64    97    80    90    50    99
Sign         82    36    61    92    35    93
Sky          84    78    95    50    83    96
Tree         69    79    87    76    86    90
Water        58    54    67    61    53    68
Table 1: The average pixel-wise classification accuracy on the MSRC dataset. (RF)2 yields the best
performance for all object classes except one.
5
Figure 1: Our object recognition and segmentation results on example images from the MSRC
dataset (top two rows), and the Street-Scene dataset (bottom two rows). The figure depicts boundaries of the finest-scale regions found by the multiscale algorithm of [17], and the color-coded labels
of these regions inferred by (RF)2 . The results are good despite the presence of partial occlusion,
and changes in illumination and scale. (best viewed in color)
Method     MSRC            Street-Scene    Test time
(RF)2-1    69.5% ± 13.7%   78.2% ± 0.5%    45s
(RF)2-2    80.2% ± 14.4%   86.7% ± 0.5%    31s
(RF)2-3    82.9% ± 15.8%   89.8% ± 0.6%    31s
[20]       70.0%           N/A             N/A
[21]       76.4%           83.0%           N/A
[6]        N/A             84.2%           N/A
[3]        70.0%           N/A             10-30s
Table 2: The average pixel-wise classification accuracy and
average computation times on the MSRC and Street-Scene
datasets of the three variants of our approach with those of
the state-of-the-art CRF-based methods.
Figure 2: The probability of classification error of (RF)2, P(ε), given by Eq. (6) and Theorem 1 as a
function of the margin, γ, of RF.
Since the depth of each decision tree in RF is less than 30, the complexity of dropping an instance
through one tree is O(1), and through RF with T trees is O(T). Our C-implementation of the
RF-guided SW-cut inference of CRF takes 10s-30s on a 2.40GHz PC with 3.48GB RAM for MSRC
and Street-Scene images. Table 2 shows that our average running times are comparable to those of
the other CRF methods that use approximate inference [3, 6, 20, 21].
6 Theoretical Analysis
We are interested in a theoretical explanation of the good performance of (RF)2 presented in the
previous section. In particular, we derive the theoretical performance bounds of a two-class (RF)2 ,
for simplicity. As explained in Sec. 3, we use the SW-cut for (RF)2 inference. The SW-cut iterates
the Metropolis-Hastings (MH) reversible jumps, and thus explores the state-space of solutions. An
MH jump between states A and B is controlled by the acceptance rate α(A→B), which depends on
the ratios of the proposal and posterior distributions, [q(B→A) p(Y=B|G)] / [q(A→B) p(Y=A|G)]. Below, we show that the
error made by the two-class RF in estimating these ratios is bounded. Our derivation of the error
bounds of RF is based on the theoretical analysis of evidence trees, presented in [15].
6.1 An Upper Error Bound of (RF)2
An error occurs along MH jumps when a balanced reversible jump is encountered, i.e., when there
is no preference between jumping from state A to state B and the reverse, q(B→A)/q(A→B) = 1, and RF wrongly
predicts that the posterior distribution of state B is larger than that of A, p(Y=B|G)/p(Y=A|G) ≥ 1. In this
case, α(A→B) = 1, and the SW-cut will erroneously visit state B. We are interested in finding the
probability of this error, specified as

P(ε) = P( p(Y=B|G)/p(Y=A|G) ≥ 1 ) = P( ∏_{i∈CC} (p_i^B/p_i^A) ∏_{j∈N(i)} (p_ij^B/p_ij^A) ≥ 1 ).   (6)
From Eq. (6), P(ε) can be computed using the probability density function of a product of random
variables Z_i = p_i^B/p_i^A ∈ [0, ∞) and W_ij = p_ij^B/p_ij^A ∈ [0, ∞), within a specific connected component CC, where |CC| = n, i = 1, ..., n, and j ∈ N(i). As we will prove in the
sequel, all random variables Z_i have the same exponential distribution f_{Z_i}(z) = λ1 exp(-λ1 z).
Also, we will prove that all random variables W_ij have the same exponential distribution
f_{W_ij}(w) = λ2 exp(-λ2 w). Then, it follows that the product Z = ∏_{i=1}^n Z_i = (Z_i)^n has the distribution
f_Z(z) = (λ1/n) z^{(1-n)/n} exp(-λ1 z^{1/n}). Also, the product W = ∏_{i=1}^n ∏_{j∈N(i)} W_ij = (W_ij)^{nk} ≈ (W_ij)^n has
the distribution f_W(w) = (λ2/n) w^{(1-n)/n} exp(-λ2 w^{1/n}), where we approximate that the number of edges
within CC is the same as the number of nodes in CC, as a result of the probabilistic "cutting" of
graph edges by the SW-cut algorithm. Given f_Z(z) and f_W(w), from Eq. (6), we analytically derive the probability that (RF)2 makes a wrong prediction, P(ε) = P(Z·W ≥ 1), as stated in the
following theorem.
Theorem 1. The probability that (RF)2 makes a wrong prediction is P(ε) = P(Z·W ≥ 1) = βK1(β),
where Z ∈ [0, ∞) and W ∈ [0, ∞) are random variables characterized by the probability density functions
f_Z(z) = (λ1/n) z^{(1-n)/n} exp(-λ1 z^{1/n}) and f_W(w) = (λ2/n) w^{(1-n)/n} exp(-λ2 w^{1/n}), with parameters λ1 and
λ2, and where K1 is the modified Bessel function of the second kind, and β = 2√(λ1 λ2).
Proof. Define H = Z·W. Then, f_H(h) = ∫_0^∞ (1/z) f_Z(z) f_W(h/z) dz = (β²/2n) h^{(1-n)/n} K0(β h^{1/(2n)}), where
K0 is the modified Bessel function of the second kind. It follows that P(ε) = P(H ≥ 1) =
1 - ∫_0^1 f_H(h) dh = βK1(β).
As we will show in the following section, the parameter β is directly proportional to a measure
of accuracy of RF predictions, referred to as probabilistic margin. Since K1(β) is a decreasing
function, it follows that the probability that (RF)2 makes a wrong prediction is upper bounded, and
decreases as the probabilistic margin of RF increases.
6.2 A Mathematical Model of RF Performance
In this section, we derive that the RF estimates of the ratios of posteriors Zi and Wij have the
exponential distribution. We consider a binary classification problem, for simplicity, where training
and test instances may have positive and negative labels. We assume that the two classes are balanced,
P(y=+1) = P(y=-1) = 1/2. We define η to be the fraction of pairs of instances that have a certain
relationship, corresponding to a particular spatial or structural relationship between pairs of regions,
defined in Sec. 2. The learning algorithm that creates RF is not modeled. Instead, we assume that
the learned decision trees have the following properties. Each leaf node of a decision tree: (i) stores
a total of C training instances that reach the leaf; and (ii) has a probabilistic margin γ ∈ [0, 1/2).
By margin, we mean that in every leaf reached by C training instances a fraction of 1/2 + γ of the
training instances will belong to one class (e.g., positive), and a fraction 1/2 - γ of them will belong
to the other class (e.g., negative). We say that a leaf is positive if a majority of the training instances
collected by the leaf is positive, or otherwise, we say that the leaf is negative. It is straightforward
to show that when a positive instance is dropped through one of the decision trees in RF, it will
reach a positive leaf with probability 1/2 + γ, and a negative leaf with probability 1/2 - γ [15].
The same holds for negative instances. A new test instance is classified by dropping it through T
decision trees, and taking a majority vote of the labels of all C·T training instances stored in the
leaves reached by the test instance. We refer to this classification procedure as evidence voting [15],
as opposed to decision voting over the leaf labels in the standard RF [13]. The following proposition
states that the probability that evidence voting misclassifies an instance, P(ε1), is upper bounded.
Proposition 1. The probability that RF with T trees, where every leaf stores C training instances,
incorrectly classifies an instance is upper bounded, P(ε1) ≤ exp(-8CTγ^4).
Proof. Evidence voting for labeling an instance can be formalized as drawing a total of C·T independent Bernoulli random variables, with the success rate p1, whose outcomes are {-1, +1}, where
+1 is received for correct, and -1 for incorrect labeling of the instance. Let S1 denote a sum of
these Bernoulli random variables. Thus, a positive instance is incorrectly labeled if S1 ≤ 0, and a negative instance is misclassified if S1 > 0. Since the two classes are balanced, by applying the standard
Chernoff bound, we obtain P(ε1) = P(S1 ≤ 0) ≤ exp(-2CT(p1 - 1/2)^2). The success rate p1 can be
derived as follows. When a positive (negative) instance is dropped through a decision tree, it will fall
in a positive (negative) leaf with probability 1/2 + γ, where it will be labeled as positive (negative)
with probability 1/2 + γ; else, the positive (negative) instance will be routed to a negative (positive)
leaf with probability 1/2 - γ, where it will be labeled as positive (negative) with probability 1/2 - γ.
Consequently, the probability that an instance is correctly labeled, i.e., the success rate of the associated Bernoulli random variable, is p1 = (1/2 + γ)(1/2 + γ) + (1/2 - γ)(1/2 - γ) = 1/2 + 2γ^2.
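A quick Monte Carlo check of this success rate (an illustration only; the trial count and margin value are arbitrary choices):

import random

gamma, trials, correct = 0.2, 200_000, 0
for _ in range(trials):
    own_leaf = random.random() < 0.5 + gamma      # routed to a leaf of its own class
    p_correct = 0.5 + gamma if own_leaf else 0.5 - gamma
    correct += random.random() < p_correct
print(correct / trials, 0.5 + 2 * gamma ** 2)     # both are ~0.58 for gamma = 0.2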
Evidence voting is also used for labeling pairs of instances. The probability that evidence voting
misclassifies a pair of test instances, P(ε2), is upper bounded, as stated in Proposition 2.
Proposition 2. Given RF as in Proposition 1, the probability that RF incorrectly labels a pair of
instances having a certain relationship is upper bounded, P(ε2) ≤ exp(-8C^2 T η^4 γ^8).
Proof. Evidence voting for labeling a pair of instances can be formalized as drawing a total of
C^2 T independent Bernoulli random variables, with success rate p2, whose outcomes are {-1, +1},
where +1 is received for correct, and -1 for incorrect labeling of the instance pair. Let S2 denote
a sum of these Bernoulli random variables. Then, P(ε2) = P(S2 ≤ 0) ≤ exp(-2C^2 T (p2 - 1/2)^2).
Similar to the proof of Proposition 1, by considering three possible cases of correct labeling of a
pair of instances when dropping the pair through a decision tree, the success rate p2 can be derived
as p2 = η(1/2 + γ^2)(1/2 + ηγ^2) + η(1/2 - γ^2)(1/2 - ηγ^2) + (1 - η)(1/2) = 1/2 + 2η^2 γ^4, where η is the
fraction of pairs of instances that have the same type of relationship.
From Proposition 1, it follows that the probability that RF makes a wrong prediction about the posterior ratio of an instance is upper bounded, P(Zᵢ ≥ 1) = P(ε₁) ≤ exp(−8CTσ⁴), ∀i ∈ CC. This
gives the probability density function f_Zᵢ(z) = λ₁ exp(−λ₁ z), where λ₁ = 8CTσ⁴. In addition,
from Proposition 2, it follows that the probability that RF makes a wrong prediction about the posterior ratio of a pair of instances is upper bounded, P(Wᵢⱼ ≥ 1) = P(ε₂) ≤ exp(−8C²Tη⁴σ⁸),
∀i ∈ CC and j ∈ N(i). This gives the probability density function f_Wᵢⱼ(w) = λ₂ exp(−λ₂ w),
where λ₂ = 8C²Tη⁴σ⁸. By plugging these results in Theorem 1, we complete the derivation of the
upper error bound of (RF)². From Theorem 1, P(ε) decreases when any of the following parameters
increases: C, T, σ, and η. Fig. 2 shows the influence of σ on P(ε), when the other parameters are
fixed to their typical values: C = 20, T = 100, and η = 0.1.
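As a rough numeric illustration of these bounds, consider the following Python sketch. It is our own rendering, with the reconstructed symbols above: σ for the probabilistic margin and η for the fraction of same-relationship pairs.

import math

def instance_error_bound(C, T, sigma):
    # Proposition 1: P(eps_1) <= exp(-8 * C * T * sigma^4)
    return math.exp(-8 * C * T * sigma ** 4)

def pair_error_bound(C, T, sigma, eta):
    # Proposition 2: P(eps_2) <= exp(-8 * C^2 * T * eta^4 * sigma^8)
    return math.exp(-8 * C ** 2 * T * eta ** 4 * sigma ** 8)

# Typical values quoted in the text: C = 20 instances per leaf, T = 100 trees.
for sigma in (0.05, 0.1, 0.2):
    print(sigma,
          instance_error_bound(20, 100, sigma),
          pair_error_bound(20, 100, sigma, eta=0.1))

Running the loop confirms the qualitative claim: both bounds shrink monotonically as C, T, σ or η grow.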
7 Conclusion
We have presented (RF)², a framework that uses the random forest (RF) for the MCMC-based
inference of a conditional random field (CRF). Our key idea is to employ RF to directly compute
the ratios of the proposal and posterior distributions of states visited along the Metropolis-Hastings
reversible jumps, instead of estimating each individual distribution, and thus improve the convergence rate and accuracy of the CRF inference. Such a non-parametric formulation of CRF and its
inference has been demonstrated to outperform, in terms of computation time and accuracy, existing
parametric CRF models on the task of multiclass object recognition and segmentation. We have also
derived the upper error bounds of the two-class RF and (RF)², and showed that the classification
error of (RF)² decreases as any of the following RF parameters increases: the number of decision
trees, the number of training examples stored in every leaf node, and the probabilistic margin.
References
[1] L.-J. Li, R. Socher, and L. Fei-Fei, "Towards total scene understanding: Classification, annotation and segmentation in an automatic framework," in CVPR, 2009.
[2] X. He, R. S. Zemel, and M. A. Carreira-Perpinan, "Multiscale conditional random fields for image labeling," in CVPR, 2004, pp. 695–702.
[3] J. Shotton, J. Winn, C. Rother, and A. Criminisi, "TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation," in ECCV, 2006, pp. 1–15.
[4] J. Verbeek and B. Triggs, "Scene segmentation with CRFs learned from partially labeled images," in NIPS, 2007.
[5] A. Torralba, K. P. Murphy, and W. T. Freeman, "Contextual models for object detection using boosted random fields," in NIPS, 2004.
[6] S. Gould, T. Gao, and D. Koller, "Region-based segmentation and object detection," in NIPS, 2009.
[7] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie, "Objects in context," in ICCV, 2007.
[8] N. Payet and S. Todorovic, "From a set of shapes to object discovery," in ECCV, 2010.
[9] S. Todorovic and N. Ahuja, "Unsupervised category modeling, recognition, and segmentation in images," IEEE TPAMI, vol. 30, no. 12, pp. 1–17, 2008.
[10] J. J. Lim, P. Arbelaez, C. Gu, and J. Malik, "Context by region ancestry," in ICCV, 2009.
[11] J. Sivic, B. C. Russell, A. Zisserman, W. T. Freeman, and A. A. Efros, "Unsupervised discovery of visual object class hierarchies," in CVPR, 2008.
[12] J. Lafferty, A. McCallum, and F. Pereira, "Conditional random fields: Probabilistic models for segmenting and labeling sequence data," in ICML, 2001, pp. 282–289.
[13] L. Breiman, "Random forests," Mach. Learn., vol. 45, no. 1, pp. 5–32, 2001.
[14] J. Gall and V. Lempitsky, "Class-specific Hough forests for object detection," in CVPR, 2009.
[15] G. Martinez-Munoz, N. Larios, E. Mortensen, W. Zhang, A. Yamamuro, R. Paasch, N. Payet, D. Lytle, L. Shapiro, S. Todorovic, A. Moldenke, and T. Dietterich, "Dictionary-free categorization of very similar objects via stacked evidence trees," in CVPR, 2009.
[16] Y. Lin and Y. Jeon, "Random forests and adaptive nearest neighbors," Journal of the American Statistical Association, 101(474), 2006.
[17] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, "From contours to regions: An empirical evaluation," in CVPR, 2009.
[18] A. Barbu and S.-C. Zhu, "Graph partition by Swendsen-Wang cuts," in ICCV, 2003, p. 320.
[19] S. Bileschi and L. Wolf, "A unified system for object detection, texture recognition, and context analysis based on the standard model feature set," in BMVC, 2005.
[20] C. Galleguillos, B. McFee, S. Belongie, and G. R. G. Lanckriet, "Multi-class object localization by combining local contextual interactions," in CVPR, 2010.
[21] S. Gould, R. Fulton, and D. Koller, "Decomposing a scene into geometric and semantically consistent regions," in ICCV, 2009.
[22] J. Shotton, M. Johnson, and R. Cipolla, "Semantic texton forests for image categorization and segmentation," in CVPR, 2008.
[23] Z. Tu and X. Bai, "Auto-context and its application to high-level vision tasks and 3D brain image segmentation," IEEE TPAMI, vol. 99, 2009.
3,469 | 4,141 | Gaussian Process Preference Elicitation
Edwin V. Bonilla, Shengbo Guo, Scott Sanner
NICTA & ANU, Locked Bag 8001, Canberra ACT 2601, Australia
{edwin.bonilla, shengbo.guo, scott.sanner}@nicta.com.au
Abstract
Bayesian approaches to preference elicitation (PE) are particularly attractive due
to their ability to explicitly model uncertainty in users' latent utility functions.
However, previous approaches to Bayesian PE have ignored the important problem of generalizing from previous users to an unseen user in order to reduce the
elicitation burden on new users. In this paper, we address this deficiency by introducing a Gaussian Process (GP) prior over users' latent utility functions on the
joint space of user and item features. We learn the hyper-parameters of this GP on
a set of preferences of previous users and use it to aid in the elicitation process for
a new user. This approach provides a flexible model of a multi-user utility function, facilitates an efficient value of information (VOI) heuristic query selection
strategy, and provides a principled way to incorporate the elicitations of multiple
users back into the model. We show the effectiveness of our method in comparison
to previous work on a real dataset of user preferences over sushi types.
1
Introduction
Preference elicitation (PE) is an important component of interactive decision support systems that
aim to make optimal recommendations to users by actively querying their preferences. A crucial
requirement for PE systems is that they should be able to make optimal or near optimal recommendations based only on a small number of queries. In order to achieve this, a PE system should (a)
maintain a flexible representation of the user's utility function; (b) handle uncertainty in a principled
manner; (c) select queries that allow the system to discriminate amongst the highest utility items;
and (d) allow for the incorporation of prior knowledge from different sources.
While previous Bayesian PE approaches have addressed (a), (b) and (c), they appear to ignore an
important aspect of (d) concerning generalization from previous users to a new unseen user in order
to reduce the elicitation burden on new users. In this paper we propose a Bayesian PE approach
to address (a)–(d), including generalization to new users, in an elegant and principled way. Our
approach places a (correlated) Gaussian process (GP) prior over the latent utility functions on the
joint space of user features (T , mnemonic for tasks) and item features (X ). User preferences over
items are then seen as drawn from the comparison of these utility function values.
The main advantages of our GP-based Bayesian PE approach are as follows. First, due to the nonparametric Bayesian nature of GPs, we have a flexible model of the user's utility function that can
handle uncertainty and incorporate evidence straightforwardly. Second, by having a GP over the
joint T × X space, we can integrate prior knowledge on user similarity or item similarity, or simply
have more general-purpose covariances whose parameterization can be learned from observed preferences of previous users (i.e. achieving integration of multi-user information). Finally, our approach
draws from concepts in the Gaussian process optimization and decision-making literature [1, 2] to
propose a Bayesian decision-theoretic PE approach. Here the required expected value of information computations can be derived in closed-form to facilitate the selection of informative queries and
determine the highest utility item from the available item set as quickly as possible.
In this paper we focus on pairwise comparison queries for PE, which are known to have low cognitive
load [3, 4]. In particular, we assume a likelihood model of pairwise preferences that factorizes over
users and preferences and a GP prior over the latent utility functions correlates users and items.
2 Problem Formulation
Let x denote a specific item (or product) that is described by a set of features x and t denote a user
(mnemonic for task) that can be characterized with features t. For a set of items X = {x1 , . . . , xN }
and users T = {t1 , . . . , tM } we are given a set of training preference pairs:
D = { (t^(j), x_{k1}^(j) ≻ x_{k2}^(j)) | k = 1, . . . , K_j; k1, k2 ∈ {1, . . . , N}; j = 1, . . . , M },   (1)

where x_{k1}^(j) ≻ x_{k2}^(j) denotes that we have observed that user j prefers item k1 over item k2, and K_j is
the number of preference relations observed for user j.
The preference elicitation problem is that given a new user, described by a set of features t_*, we aim
to determine (or elicit) what his/her preferences (or favourite items) are by asking a small number
of queries of the form q_ij := (x_i ≻ x_j), meaning that he/she will prefer item i over item j. Ideally, we
would like to obtain the best user preferences with the smallest number of possible queries.
The key idea of this paper is that of learning a Gaussian process (GP) model over users' latent utility
functions and use this model in order to drive the elicitation process of a new user. Due to the
non-parametric Bayesian nature of the GPs, this allows us to have a powerful model of the user's
utility function and to incorporate the evidence (i.e. the responses the user gives to our queries) in a
principled manner. Our approach directly exploits: (a) user-relatedness, i.e. that users with similar
characteristics may have similar preferences; (b) items' similarities and (c) the value of information
of obtaining a response to a query in order to elicit the preferences of the user.
3 Likelihood Model
Our likelihood model considers that the users' preference relationships are conditionally independent given the latent utility functions. In other words, the probability of a user t preferring item x
over item x′ given their utility functions is:

p(x ≻^t x′ | f(t, x), f(t, x′), δ) = I[ f(t, x) − f(t, x′) ≥ δ ],  with  p(δ) = N(δ | 0, σ²),   (2)

where I[·] is an indicator function that is 1 if the condition [·] is true and 0 otherwise; and σ² is the
variance of the normally distributed variable δ that dictates how different the latent functions should
be for the corresponding relation to hold. Hence:

p(x ≻^t x′ | f(t, x), f(t, x′)) = ∫_{−∞}^{∞} I[ f(t, x) − f(t, x′) ≥ δ ] N(δ | 0, σ²) dδ   (3)
  = Φ( ( f(t, x) − f(t, x′) ) / σ ),   (4)

where Φ(·) is the Normal cumulative distribution function (cdf). The conditional data-likelihood is
then given by:
then given by:
p(D|f ) =
Kj
M Y
Y
?(zkj )
with zkj =
j=1 k=1
4
1 (j) (j)
(j)
f (t , xk1 ) ? f (t(j) , xk2 ) .
?
(5)
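To make equation (5) concrete, a minimal sketch in Python (our own variable names; utilities are assumed to be stored in a flat array indexed by (user, item) pairs):

import numpy as np
from scipy.stats import norm

def preference_log_likelihood(f, prefs, sigma):
    # log p(D|f) = sum over observed pairs of log Phi((f[winner] - f[loser]) / sigma), eq. (5)
    z = np.array([(f[a] - f[b]) / sigma for a, b in prefs])
    return norm.logcdf(z).sum()

Here prefs is a list of (winner, loser) index pairs into f, one entry per observed preference relation.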
4 Modeling User Dependencies with a GP Prior
As mentioned above, we model user (and item) dependencies via the user latent utility functions,
which are assumed to be drawn from a GP prior that accounts for user similarity and item similarity
directly:
f(t, x) ∼ GP( 0, κ^t(t, t′) κ^x(x, x′) ),   (6)
where κ^t(·, ·) is a covariance function on user-descriptors t and κ^x(·, ·) is a covariance function
on item features x. We will denote the parameters of these covariance functions (so-called hyperparameters) by θ_t and θ_x. (These types of priors have been considered previously in the regression
setting, see e.g. [5].)
Additionally, let f be the utility function values for all training users at all training input locations
(i.e. items) so that f = [f(t^(1), x^(1)), . . . , f(t^(1), x^(N)), . . . , f(t^(M), x^(1)), . . . , f(t^(M), x^(N))]^T and
F be the N × M matrix for which the jth column corresponds to the latent values for the jth user
at all input points such that f = vec F. Hence:
f ∼ N(0, Σ)  with  Σ = K^t ⊗ K^x,   (7)
where K^t is the covariance between all the training users, K^x is the covariance between all the
training input locations, and ⊗ denotes the Kronecker product. Note that dependencies between
users are not arbitrarily imposed but rather they will be learned from the available data by optimizing
the marginal likelihood. (We will describe the details of hyper-parameter learning in Section 7.)
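A minimal sketch of this prior (our own Python; the squared-exponential covariance used here anticipates the choice made in the experiments, and all names are illustrative):

import numpy as np

def se_cov(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def joint_prior_cov(user_feats, item_feats):
    Kt = se_cov(user_feats, user_feats)   # M x M covariance between users
    Kx = se_cov(item_feats, item_feats)   # N x N covariance between items
    return np.kron(Kt, Kx)                # (MN) x (MN) covariance of f = vec(F)

The Kronecker structure means a careful implementation never has to form the joint covariance explicitly, but np.kron suffices for illustration.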
5 Posterior and Predictive Distributions
Given the data in (1) and the prior over the latent utility functions in equation (6), we can obtain the
posterior distribution:

p(f | D, θ) = p(D | f, θ) p(f | θ) / p(D | θ),   (8)

where we have emphasized the dependency on the hyper-parameters θ, which include θ_t, θ_x and σ²,
and where p(D|θ) is the marginal likelihood (or evidence) with p(D|θ) = ∫ p(D|f, θ) p(f|θ) df.
The non-Gaussian nature of the conditional likelihood term (given in equation (5)) makes the above
integral analytically intractable and hence we will require approximations. In this paper we will focus on analytical approximations and more specifically, we will approximate the posterior p(f|D, θ),
and the evidence, using the Laplace approximation.
The Laplace method approximates the true posterior with a Gaussian: p(f|D, θ) ≈ N(f | f̂, A⁻¹),
where f̂ = argmax_f p(f|D, θ) = argmax_f p(D|f, θ) p(f|θ) and A is the Hessian of the negative
log-posterior evaluated at f̂. Hence we consider the unnormalized expression p(D|f, θ) p(f|θ) and,
omitting the terms that are independent of f, we focus on the maximization of the following expression:
Ψ(f) = ∑_{j=1}^{M} ∑_{k=1}^{K_j} log Φ(z_kj) − (1/2) f^T Σ⁻¹ f.   (9)
Using Newton's method we obtain the following iterative update:

f^new = (W + Σ⁻¹)⁻¹ ( ∂ log p(D|f, θ)/∂f + W f ),  with  W_pq = − ∑_{j=1}^{M} ∑_{k=1}^{K_j} ∂² log Φ(z_kj) / ∂f_p ∂f_q.   (10)
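A compact rendering of this Newton iteration (a sketch under our own conventions: prefs lists the observed (preferred, other) index pairs, and the derivatives of log Φ are written out explicitly):

import numpy as np
from scipy.stats import norm

def laplace_mode(Sigma, prefs, sigma, iters=50):
    # Newton updates of eq. (10): returns f_hat = argmax Psi(f) and W at the mode.
    n = Sigma.shape[0]
    f = np.zeros(n)
    Sinv = np.linalg.inv(Sigma)   # fine for small n; use a Cholesky factor otherwise
    for _ in range(iters):
        g = np.zeros(n)           # gradient of log p(D|f)
        W = np.zeros((n, n))      # minus the Hessian of log p(D|f)
        for a, b in prefs:        # observed relation: a preferred over b
            z = (f[a] - f[b]) / sigma
            v = norm.pdf(z) / norm.cdf(z)
            g[a] += v / sigma
            g[b] -= v / sigma
            h = v * (z + v) / sigma ** 2
            W[a, a] += h; W[b, b] += h
            W[a, b] -= h; W[b, a] -= h
        f = np.linalg.solve(W + Sinv, g + W @ f)
    return f, W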
Once we have found the maximum posterior f̂ by using the above iteration we can show that:

p(f|D) ≈ N( f | f̂, (W + Σ⁻¹)⁻¹ ).   (11)

5.1 Predictive Distribution
In order to set up our elicitation framework we will also need the predictive distribution for a fixed
test user t_* at an unseen pair x_*^1, x_*^2. This is given by:

p(f_* | D) = ∫ p(f_* | f) p(f | D) df   (12)
  = N( f_* | μ_*, C_* ),   (13)

with:

μ_* = k_*^T Σ⁻¹ f̂  and  C_* = Σ_* − k_*^T (Σ + W⁻¹)⁻¹ k_*,   (14)
where Σ is defined as in equation (7) and:

k_* = k_*^t ⊗ k_*^x   (15)
k_*^t = [ κ^t(t_*, t^(1)), . . . , κ^t(t_*, t^(M)) ]^T   (16)
k_*^x = [ κ^x(x_*^1, x^(1)), . . . , κ^x(x_*^1, x^(N)) ; κ^x(x_*^2, x^(1)), . . . , κ^x(x_*^2, x^(N)) ]^T   (17)
Σ_* = [ κ^t(t_*, t_*) κ^x(x_*^1, x_*^1), κ^t(t_*, t_*) κ^x(x_*^1, x_*^2) ; κ^t(t_*, t_*) κ^x(x_*^2, x_*^1), κ^t(t_*, t_*) κ^x(x_*^2, x_*^2) ].   (18)
6 Gaussian Process Preference Elicitation Framework
Now we have the main components to set up our preference elicitation framework for a test user
characterized by features t_*. Our main objective is to use the previously seen data (and the corresponding learned hyper-parameters) in order to drive the elicitation process and to incorporate the
information obtained from the user's responses back into our model in a principled manner. Our
main requirement is a function that dictates the value of making a query q_ij. In other words, we aim
at trading off the expected actual utility of the items involved in the query and the information these
items will provide regarding the user's preferences. This is the exploration-exploitation dilemma,
usually seen in optimization and reinforcement learning problems. We can address this issue by
computing the expected value of information (EVOI, [2]) of making a query involving items i and
j. Before defining the EVOI, we will make use of the concept of expected improvement, a measure
that is commonly used in optimization methods based on response surfaces (see e.g. [1]).
6.1 Expected Improvement
We have seen in equation (13) that the predictive distribution for the utility function of a test user
t_* on item x follows a Gaussian distribution:

f(t_*, x | D, θ) ∼ N( μ_*(t_*, x), s_*²(t_*, x) ),   (19)

where μ_*(t_*, x) and s_*²(t_*, x) can be obtained by using (the marginalized version of) equation
(14). Let us assume that, at any point during the elicitation process, we have an estimate of the
utility of the best item and let us denote it by f^best. If we define the predicted improvement at x as
I = f(t_*, x | D, θ) − f^best, then the expected improvement (EI) of recommending item x (for a fixed
user t_*) instead of recommending the best item x^best is given by:

EI(x|D) = ∫_0^∞ I p(I) dI = s_*(t_*, x) [ z′ Φ(z′) + φ(z′) ],   (20)

where z′ = (μ_*(t_*, x) − f^best)/s_*(t_*, x), Φ(·) is the Normal cumulative distribution function (cdf)
and φ(·) is the Normal probability density function (pdf). Note that, for simplicity in the notation,
we have omitted the dependency of EI(x|D) on the user's features t_*. Hence the maximum expected
improvement (MEI) under the current observed data D is:

MEI(D) = max_x EI(x|D).   (21)
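For reference, equations (20)–(21) translate directly into code (illustrative Python; mus and ss hold the marginal predictive means and standard deviations of the candidate items):

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, s, f_best):
    # EI(x|D) = s * [z' * Phi(z') + phi(z')], eq. (20)
    z = (mu - f_best) / s
    return s * (z * norm.cdf(z) + norm.pdf(z))

def max_expected_improvement(mus, ss, f_best):
    # MEI(D) = max_x EI(x|D), eq. (21)
    return max(expected_improvement(m, s, f_best) for m, s in zip(mus, ss))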
6.2 Expected Value of Information
Now we can define the expected value of information (EVOI) as the expected gain in improvement
that is obtained by adding a query involving a particular pairwise relation. Thus, the expected value
of information of obtaining the response for the query involving items x_*^i, x_*^j, with corresponding
utility values f_* = ( f_*(t_*, x_*^i), f_*(t_*, x_*^j) )^T, is given by:

EVOI(D, i, j) = −MEI(D) + ⟨ ∑_{q_ij} p(q_ij | f_*, D) MEI(D ∪ q_ij) ⟩_{p(f_*|D)}   (22)
  = −MEI(D) + ⟨ p(x_*^i ≻ x_*^j | f_*, D) ⟩_{p(f_*|D)} MEI(D ∪ {x_*^i ≻ x_*^j})
    + ⟨ p(x_*^j ≻ x_*^i | f_*, D) ⟩_{p(f_*|D)} MEI(D ∪ {x_*^j ≻ x_*^i}),   (23)
Algorithm 1 Gaussian Process Preference Elicitation
Require: hyper-parameters θ_x, θ_t, σ {learned from M previous users} and corresponding D
repeat
  for all candidate pairs (i, j) do
    Compute EVOI(i, j, D, f̂, W) {equation (23)}
  end for
  (i*, j*) ← argmax_{i,j} EVOI(i, j) {best pair}
  Remove (i*, j*) from candidate list
  if q_{i*,j*} is true then {ask user and set true preference}
    (i^true, j^true) ← (i*, j*)
  else
    (i^true, j^true) ← (j*, i*)
  end if
  D ← D ∪ (t_{M+1}, x_{i^true} ≻ x_{j^true}) {Expand D and get D⁺}
  Update f̂, W {i.e. p(f|D) as in equation (10)}
until Satisfied
where

⟨ p(x_*^i ≻ x_*^j | f_*, D) ⟩_{p(f_*|D)} = p(x_*^i ≻ x_*^j | D)   (24)
  = ∫ p(x_*^i ≻ x_*^j | f_*, D) p(f_*|D) df_*   (25)
  = ∫∫ I[ f_i^* − f_j^* ≥ δ ] N(δ | 0, σ²) N(f_* | μ_*, C_*) dδ df_*   (26)
  = Φ( (μ_*^i − μ_*^j) / √( C_{i,i} + C_{j,j} − 2C_{i,j} + σ² ) ),   (27)

and μ_* and C_* are as defined in (14). Note that in our model p(x_*^j ≻ x_*^i | D) = 1 − p(x_*^i ≻ x_*^j | D).
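Equations (23) and (27) give everything needed to score a candidate query; a minimal sketch (our own names; mu and C are the bivariate predictive mean and covariance of eq. (14)):

import numpy as np
from scipy.stats import norm

def pref_prob(mu, C, sigma):
    # p(x_i > x_j | D) = Phi((mu_i - mu_j) / sqrt(C_ii + C_jj - 2*C_ij + sigma^2)), eq. (27)
    denom = np.sqrt(C[0, 0] + C[1, 1] - 2 * C[0, 1] + sigma ** 2)
    return norm.cdf((mu[0] - mu[1]) / denom)

def evoi(mei_now, mei_if_i_wins, mei_if_j_wins, p_ij):
    # EVOI(D, i, j), eq. (23): the two MEI terms are recomputed with the
    # posterior updated under each possible answer to the query.
    return -mei_now + p_ij * mei_if_i_wins + (1 - p_ij) * mei_if_j_wins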
As mentioned above, f^best can be thought of as an estimate of the utility of the best item as its true
utility is unknown. In practice we maintain our beliefs over the utilities of the items p(f|D⁺) for the
training users and the test user, where D⁺ denotes the data extended by the set of seen relationships
on the test user (which is initially empty). Hence, we can set f^best = max_i F̂⁺_{i,M+1}, where F̂⁺
is the matrix containing the mean estimates of the latent utility function distribution given by the
Laplace approximation in equation (9). Alternatively, we can draw samples from such a distribution
and apply the max operator.
In order to elicit preferences on a new user we simply select a query so that it maximizes the expected
value of information EVOI as defined in equation (23). A summary of our approach is presented
in Algorithm 1. We note that although, in principle, one could also update the hyper-parameters
based on the data provided by the new user, we avoid this in order to keep computations manageable
at query time. The reasoning is that, implicitly, we have learned the utility functions over all
users and we represent the utility of the test user (explicitly) on demand, updating our beliefs to
incorporate the information provided by the user's responses.
7 Hyper-parameter Learning
Throughout this paper we have assumed that we have learned a Gaussian process model for the
utility functions over users and items based upon previously seen preference relations. We refer to
the hyper-parameters of our model as the hyper-parameters θ_t and θ_x of the covariance functions
(κ^t and κ^x respectively) and θ_σ = log σ, where σ² is the 'noise' variance.
Although it is entirely possible to use prior knowledge on what these covariance functions are (or
their corresponding parameter settings) for the specific problem under consideration, in many practical applications such prior knowledge is not available and one requires to tune such parameterizations based upon the available data. Fortunately, as in the standard GP regression framework, we can
achieve this in a principled way through maximization of the marginal likelihood (or evidence).
As in the case of the posterior distribution, the marginal likelihood is analytically intractable and
approximations are needed. The Laplace approximation to the marginal log-likelihood is given by:

log p(D|θ) ≈ −(1/2) log |Σ W + I| − (1/2) f̂^T Σ⁻¹ f̂ + ∑_{j=1}^{M} ∑_{k=1}^{K_j} log Φ(ẑ_kj),   (28)

where ẑ_kj = z_kj evaluated at f̂, and f̂ and W are defined as in (10) and Σ is defined as in equation (7). Note that
computations are not carried out at all the M × N data-points but only at those locations that
'support' the seen relations and hence we should write e.g. f̂_o, Σ_o where the subindex {·}_o indicates
this fact. However, for simplicity, we have omitted this notation.
Given the equation above, gradient-based optimization can be used for learning the hyper-parameters
in our model. As we shall see in the following section, for our experiments we do not have much
prior information on suitable hyper-parameter settings and therefore we have carried out hyperparameter learning by maximization of the marginal log-likelihood.
8 Experiments & Results
In this section we describe the dataset used in our experiments, the evaluation setting and the results
obtained with our model and other baseline methods.
8.1 The Sushi Dataset
We evaluate our approach on the Sushi dataset [6]. Here we present a brief description of this dataset
and the pre-processing we have carried out in order to apply our method. The reader is referred to [6]
for more details. The Sushi dataset contains full rankings given by 5000 Japanese users over N = 10
different types of sushi. Each sushi is associated with a set of features which include style, major
group, minor group, heaviness, consumption frequency, normalized price and sell frequency. The
first three features are categorical and therefore we have created the corresponding dummy variables
to be used by our method. The resulting features are then represented by a 15-dimensional vector
(x). Each user is also represented by a set of features which include gender, age and other features
that compile geographical/regional information. As with the item features, we have created dummy
variables for those categorical features, which resulted in an 85-dimensional feature vector (t) for
each user. As pointed out in the documentation of the dataset, Japanese food preferences are strongly
correlated with geographical and regional information. Therefore, modeling user similarities may
provide useful information during the elicitation process.
8.2 Evaluation Methodology and Experimental Details
We evaluate our method via 10-fold cross-validation, where we have sub-sampled the training folds
in order to (a) keep the computational burden as low as possible and (b) show that we can learn
sensible parameterizations based upon relatively low requirements in terms of the preferences seen
on previous users. In particular, we have subsampled 50 training users and selected about 5 training
pairwise preferences drawn from each of the N = 10 available items.
For the GPs we have used the squared exponential (SE) covariance functions with automatic relevance determination (ARD) for both κ^t and κ^x and have carried out hyper-parameter learning via
gradient-based optimization of the marginal likelihood in equation (28). We have initialized the
hyper-parameters of the models deterministically, setting the signal variance and the length-scales
of the covariance function to the initial values of 1 and the σ² parameter to 0.01.
In order to measure the quality of our preference elicitation approach we use the normalized
loss as a function of the number of queries, where at each iteration the method provides a recommendation based on the available information. The normalized loss function is defined as:
(u_best − u_pred)/u_best, where u_best is the best utility for a specific item/user and u_pred is the utility
achieved by the recommendation provided by the system.
[Figure 1: two panels of normalized average loss versus number of queries; legend of panel (a): RVOI, B&L, GPPE; legend of panel (b): GPPE-PRIOR, GPPE-OPT.]
Figure 1: The Normalized average loss as a function of the number of queries with 2 standard
(errors of the mean) error bars. (a) The performance of our model compared to the RVOI method
described in [7] and the B&L heuristic over the full set of 5000 test users. (b) The performance of our
model when the hyper-parameters have been optimized via maximization of the marginal likelihood
(GPPE-OPT) compared to the same GP elicitation framework when these hyper-parameters have
been set to their default values (GPPE-PRIOR).
We compare our approach to two baseline methods. One is the restricted value of information algorithm [7] and the other one is the best and largest heuristic, which we will refer to as the RVOI
does not leverage information from other users and it considers diagonal Gaussians as prior models
of the latent utility functions. The B&L heuristic selects the current best item and the one with the
largest uncertainty. Both baselines have been shown to be competitive methods for preference elicitation (see [7] for more details). Additionally, we compare our method when the hyper-parameters
have been learned on the set of previously seen users with the same GP elicitation approach when
the hyper-parameters have been set to the initial values described above. This allows us to show that,
indeed, when prior information on user and item similarity is not available, our model does learn
sensible settings of the hyper-parameters, which lead to better quality elicitation outcomes.
8.3 Results
Figure 1(a) shows the normalized average loss across all 5000 users as a function of the number
of queries. As can be seen, on average, all competing methods reduce the expected loss as the
number of queries increases. More importantly, our method (GPPE) clearly outperforms the other
algorithms even for a small number of queries. This demonstrates that our approach exploits the
inter-relations between users and items effectively in order to enhance the elicitation process on a
new user. Although it may be surprising that the B&L heuristic outperforms the RVOI method, we
point out that the evaluation of these methods presented in [7] did not consider real datasets as we
do in our experiments.
Figure 1(b) shows the normalized average loss across all 5000 users for our method when the hyperparameters have been set to the initial values described in section 8 (labeled in the figure as GPPE-PRIOR) and when the hyper-parameters have been optimized by maximization of the marginal likelihood on a set of previously seen users (labeled in the figure as GPPE-OPT). We can see that,
indeed, the GPPE model that learns the hyper-parameters from previous users' data significantly
outperforms the same method when these (hyper-)parameters are not optimized.
9 Related Work
Preference elicitation (PE) is an important component of recommender systems and market research.
Traditional PE frameworks focus on modeling and eliciting a single user's preferences. We can
categorize different PE frameworks in terms of query types. In [8], the authors propose to model
utilities as random variables, and refines utility uncertainty by using standard gamble queries. The
same query type is also used in [9], which differs from [8] in treating PE as a Partially Observable
Markov Decision Process (POMDP). However, standard gamble queries are difficult for users to
respond to, and naturally lead to noisy responses. Simpler query types have also been used for PE.
For example, [7] uses pairwise comparison queries, which are believed to have low cognitive load.
Our work also adopts simple pairwise comparison queries, but it differs from [7] in that it makes use
of users' preferences that have been seen before and does not assume additive independent utilities.
In the machine learning community preference learning has received substantial interest over the past
few years. For example, one of the most recent approaches to preference learning is presented in [10],
where a multi-task learning approach to the problem of modeling human preferences is adopted
by extending the model in [11] to deal with preference data. Their model follows a hierarchical
approach based on finite Gaussian processes (GPs), where inter-user similarities are exploited by
assuming that the subjects share a set of hyper-parameters. Their model is different to ours in
that they consider the dual representation of the GPs as they do not generalize over user features.
Furthermore, they do not address the elicitation problem, which is the main concern of this paper.
Extensions of the Gaussian process formalism to model ordinal data and user preferences are given
in [12] and [13]. Both their prior and their likelihood models can be seen as single-user (task)
specifications of our model. In other words, unlike the work of [10], their model (as ours) considers
the function space view of the GPs but, unlike [10] and our approach, they do not address the
multi-task case or generalize across users. More importantly, an elicitation framework for actively
querying the user is not presented in such works.
[14] proposes an active preference learning method for discrete choice data. Their approach is based
on the model in [13]. Unlike our approach they do not leverage information from seen preferences on
previous users and hence their active preference learning process on a new user starts from scratch.
This leads to the problem of either relying on good prior information on the covariance function
or on hyper-parameter updating during the active learning process, which is computationally too
expensive to be used in practice. Additionally, as their concern is on a possibly infinite set of
discrete choices, their approach completely relies upon the expected improvement (EI) measure.
10 Conclusions & Future Work
In this paper we have presented a Gaussian process approach to the problem of preference elicitation.
One of the crucial characteristics of our method is that it exploits user-similarity via a (correlated)
Gaussian process prior over the users' latent utility functions. These similarities are 'learned' from
preferences on previous users. Our method maintains a flexible representation of the user's latent
utility function, handles uncertainty in a principled manner and allows the incorporation of prior
knowledge from different sources. The required expected value of information computations can be
derived in closed-form to facilitate the selection of informative queries and determine the highest
utility item from the available item set as quickly as possible.
We have shown the benefits of our method on a real dataset of 5000 users with preferences over
10 sushi types. In future work we aim at investigating other elicitation problems such as those
involving a Likert scale [15] where our approach may be effective. The main practical constraint
is that in order to carry out the evaluation (but not the application) of our method on real data we
require the full set of preferences of the users over a set of items.
Our main motivation for the Laplace method is its computational efficiency. However, [10] has
shown that this method is a good approximation to the posterior in the context of the preference learning problem. We intend to investigate other approximation methods to the posterior and
marginal likelihood and their joint application with sparse approximation methods within our framework (see e.g. [16]), which will be required if the number of training users is large.
Acknowledgments
NICTA is funded by the Australian Government as represented by the Department of Broadband,
Communications and the Digital Economy and the Australian Research Council through the ICT
Centre of Excellence program.
References
[1] Donald R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4):345–383, 2001.
[2] R. A. Howard. Information value theory. IEEE Transactions on Systems Science and Cybernetics, 2(1):22–26, 1966.
[3] Urszula Chajewska, Daphne Koller, and Ronald Parr. Making rational decisions using adaptive utility elicitation. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 363–369. AAAI Press / The MIT Press, 2000.
[4] Vincent Conitzer. Eliciting single-peaked preferences using comparison queries. Journal of Artificial Intelligence Research, 35:161–191, 2009.
[5] Edwin V. Bonilla, Kian Ming A. Chai, and Christopher K. I. Williams. Multi-task Gaussian process prediction. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 153–160. MIT Press, Cambridge, MA, 2008.
[6] Toshihiro Kamishima. Nantonac collaborative filtering: recommendation based on order responses. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 583–588, New York, NY, USA, 2003. ACM.
[7] Shengbo Guo and Scott Sanner. Real-time multiattribute Bayesian preference elicitation with pairwise comparison queries. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
[8] Urszula Chajewska and Daphne Koller. Utilities as random variables: Density estimation and structure discovery. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 63–71. Morgan Kaufmann Publishers Inc., 2000.
[9] Craig Boutilier. A POMDP formulation of preference elicitation problems. In Proceedings of the 18th National Conference on Artificial Intelligence, pages 239–246, Menlo Park, CA, USA, 2002. American Association for Artificial Intelligence.
[10] Adriana Birlutiu, Perry Groot, and Tom Heskes. Multi-task preference learning with an application to hearing aid personalization. Neurocomputing, 73(7-9):1177–1185, 2010.
[11] Kai Yu, Volker Tresp, and Anton Schwaighofer. Learning Gaussian processes from multiple tasks. In Proceedings of the 22nd International Conference on Machine Learning, pages 1012–1019, New York, NY, USA, 2005. ACM.
[12] Wei Chu and Zoubin Ghahramani. Gaussian processes for ordinal regression. Journal of Machine Learning Research, 6:1019–1041, 2005.
[13] Wei Chu and Zoubin Ghahramani. Preference learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning, pages 137–144, New York, NY, USA, 2005. ACM.
[14] Eric Brochu, Nando de Freitas, and Abhijeet Ghosh. Active preference learning with discrete choice data. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 409–416. MIT Press, Cambridge, MA, 2008.
[15] Rensis Likert. A technique for the measurement of attitudes. Archives of Psychology, 22(140):1–55, 1932.
[16] Joaquin Quiñonero Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, 2005.
3,470 | 4,142 | Trading off Mistakes and Don't-Know Predictions
Amin Sayedi∗
Tepper School of Business
CMU
Pittsburgh, PA 15213
[email protected]
Morteza Zadimoghaddam†
CSAIL
MIT
Cambridge, MA 02139
[email protected]
Avrim Blum‡
Department of Computer Science
CMU
Pittsburgh, PA 15213
[email protected]
Abstract
We discuss an online learning framework in which the agent is allowed to say ?I
don?t know? as well as making incorrect predictions on given examples. We analyze the trade off between saying ?I don?t know? and making mistakes. If the
number of don?t-know predictions is required to be zero, the model reduces to
the well-known mistake-bound model introduced by Littlestone [Lit88]. On the
other hand, if no mistakes are allowed, the model reduces to KWIK framework
introduced by Li et. al. [LLW08]. We propose a general, though inefficient, algorithm for general finite concept classes that minimizes the number of don?t-know
predictions subject to a given bound on the number of allowed mistakes. We then
present specific polynomial-time algorithms for the concept classes of monotone
disjunctions and linear separators with a margin.
1 Introduction
Motivated by [KS02, KK99] among others, Li, Littman and Walsh [LLW08] introduced the KWIK
framework for online learning, standing for knows what it knows. Roughly stated, in the KWIK
model, the learning algorithm is required to make only accurate predictions, although it can opt
out of predictions by saying 'I don't know' (⊥). After predicting (or answering ⊥) it is then told
the correct answer. The algorithm is not allowed to make any mistakes; still, it learns from those
examples on which it answers ⊥. The goal of the algorithm is to minimize the number of examples
on which it answers ⊥. Several aspects of the model are discussed in [LLW08], and there are many
other papers, including [WSDL, DLL09, SL08], using the framework. It is worth mentioning that
the idea of forcing the algorithm to say 'I don't know' instead of making a mistake has also appeared
in earlier work such as [RS88], and referred to as reliable learning.
Generally, it is highly desirable to have an algorithm that learns a concept in the KWIK framework
using a few, or even polynomial, number of ⊥'s. But unfortunately, for many concepts, no such
algorithm exists. In fact, it turns out that even for many basic classes which are very easy to learn in
the Mistake-bound model [Lit88], e.g. the class of singletons or disjunctions, the KWIK algorithm
needs to say ⊥ exponentially many times. The purpose of our paper is to relax the assumption of
not making any mistakes, by allowing a few mistakes, to get much better bounds on the number of
⊥'s. Or, in the other direction, our aim is to produce algorithms that can make substantially fewer
mistakes than in the standard Mistake-Bound model, by trading off some of those for (presumably
less costly) don't-know predictions.
In [LLW08], the authors show, through a non-polynomial time enumeration algorithm, that a finite
class H of functions can be learned in the KWIK framework with at most |H| − 1 number of ⊥'s.
∗ Part of this work was done when the author was an intern in Microsoft Research New England, MA.
† Part of this work was done when the author was an intern in Microsoft Research Cambridge, UK.
‡ This work was supported in part by NSF grant CCF-0830540.
p
We show that if only one mistake is allowed, that number can be reduced to 2|H|. Furthermore,
we show that the problem is equivalent to the famous egg-dropping puzzle, defined formally in
1
Section 2, hence getting bound (k + 1)H k+1 when k mistakes are allowed. Our algorithm does not
in general run in polynomial time in the description length of the target function since its running
time depends on |H|; however, we propose polynomial versions of our algorithm for two important
classes: monotone disjunctions and linear separators.
Allowing the algorithm to make mistakes in the KWIK model is equivalent to allowing the algorithm
to say 'I don't know' in the Mistake-bound model introduced in [Lit88]. In fact, one way of looking
at the algorithms presented in Section 3 is that we want to decrease the number of mistakes in the
Mistake-bound model by allowing the algorithm to say ⊥. The rest of the paper is structured as
follows. First we define the model and describe the limits of the KWIK model. Then in Section 2, we
describe how the bounds on the number of ⊥'s change if we allow a few mistakes in the KWIK
model. Finally, we give two polynomial algorithms for important classes, Monotone Disjunctions
and Linear Separators with a margin, in Section 3.
1.1 Model
We want to learn a concept class H consisting of functions f : X → {+, −}. In each stage, the
algorithm is given an example x ∈ X and is asked to predict the target function h*(x), where we
assume h* ∈ H. The algorithm might answer +, − or ⊥, representing 'I don't know'. After the
prediction, even if it is ⊥, the value of h*(x) is revealed to the algorithm. For a given integer k, we
want to design an algorithm such that for any sequence of examples, the number of times M that it
makes a mistake is not more than k, and the number of times I that it answers ⊥ is minimized.
For example, the special case of k = 0 is equivalent to the KWIK framework. Also, if k ≥ log(|H|),
the majority vote algorithm can learn the class with no ⊥ responses, i.e. I = 0.
Since we want to derive worst-case bounds, we assume that the sequence of the examples, as well as
the target function h*, are selected by an adversary. The adversary sends the examples one by one.
For each example x ∈ X, our algorithm decides what to answer; then, the adversary reveals h*(x).
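The interaction can be summarized by the following schematic loop (a Python sketch; the learner/adversary interfaces are our own illustrative names, not from the paper):

def run_protocol(learner, adversary, rounds):
    mistakes, dont_knows = 0, 0
    for _ in range(rounds):
        x = adversary.next_example()
        guess = learner.predict(x)       # one of '+', '-', 'dk'
        truth = adversary.reveal(x)      # h*(x) is revealed even after a 'dk'
        if guess == 'dk':
            dont_knows += 1
        elif guess != truth:
            mistakes += 1
        learner.update(x, truth)
    return mistakes, dont_knows

The goal is then: mistakes ≤ k on every sequence, with dont_knows as small as possible.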
1.2 The KWIK Model
Although the idea of the KWIK framework is quite useful, there are very few problems that can be
solved effectively in this framework. The following example demonstrates how an easy problem in
the Mistake-bound model can turn into a hard problem in the KWIK model.
Example 1 Suppose that H is the class of singletons. In other words, for h_i ∈ H, where h_i :
{0, 1}^n → {−, +}, we have h_i(x) = + if x is the binary representation of i, and h_i(x) = −
otherwise. Class H can be learned in the Mistake-bound model with a mistake bound of only 1. The
algorithm simply predicts − on all examples until it makes a mistake. As soon as the algorithm
makes a mistake, it can easily figure out what the target function is.
However, class H needs exponentially many ⊥'s in the KWIK framework to be learned. Since the
algorithm does not know the answer until it has seen its first positive example, it must keep answering
⊥ on all examples that it has not seen yet. Therefore, in the worst case, it answers ⊥ and finds out
that the answer is − on all the first 2^n − 1 examples that it sees.
The situation in Example 1 happens for many other classes of functions, e.g. conjunctions or disjunctions, as well.
Next, we review an algorithm (called the enumeration algorithm in [LLW08]) for solving problems
in the KWIK framework. This algorithm is the main ingredient of most of the algorithms proposed
in [LLW08].
Algorithm 1 Enumeration
The algorithm looks at all the functions in class H; if they all agree on the label of the current example x ∈ X, the algorithm outputs that label; otherwise the algorithm outputs ⊥. Upon receiving the true label of x, the algorithm then removes from H those functions h that answered incorrectly on x and continues to the next example. Note that at least one function gets removed from H each time the algorithm answers ⊥; therefore, the algorithm finds the target function with at most |H| − 1 ⊥'s.
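To make the procedure concrete, here is a minimal Python sketch of the enumeration algorithm, assuming H is given explicitly as a list of callables; the function name and the small singleton example are illustrative, not from the paper.

def enumerate_learn(H, examples, true_labels):
    """Online KWIK learning: answer only when all surviving hypotheses agree."""
    version_space = list(H)
    dont_knows = 0
    for x, y in zip(examples, true_labels):
        answers = {h(x) for h in version_space}
        prediction = answers.pop() if len(answers) == 1 else None  # None = bottom
        if prediction is None:
            dont_knows += 1
        # The true label is revealed; prune hypotheses that were wrong on x.
        version_space = [h for h in version_space if h(x) == y]
    return dont_knows

# Example: the singleton class over {0, ..., 3}, as in Example 1 (illustrative).
H = [lambda x, i=i: '+' if x == i else '-' for i in range(4)]
print(enumerate_learn(H, [0, 1, 2, 3], ['-', '-', '+', '-']))  # at most |H| - 1 = 3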
2 The KWIK Model with Mistakes
Example 1 shows how hard it can be to learn in the KWIK model. To address this, we give the following relaxation of the framework that allows concepts to be learned much more effectively and at the same time preserves the original motivation of the KWIK model: it is better to say "I don't know" than to make a mistake.

Specifically, we allow the algorithm to make at most k mistakes. Even for very small values of k, this can allow us to get much better bounds on the number of times that the algorithm answers ⊥. For example, by letting k = 1, i.e., allowing one mistake, the number of ⊥'s decreases from 2^n − 1 to 0 for the class of singletons. Of course, this case is not so interesting, since k = 1 is the mistake bound for the class. Our main interest is the case where k > 0 and yet is much smaller than the number of mistakes needed to learn in the pure Mistake-bound model.
We saw, in Algorithm 1, how to learn a concept class H with no mistakes and with O(|H|) ⊥'s. In many cases, O(|H|) is tight; in fact, if for every subset S ⊆ H with |S| > 1 there exists some x ∈ X for which |{h ∈ S : h(x) = +}| ∈ {1, |S| − 1}, then the bound is tight. This condition, for example, is satisfied by the class of intervals, that is, H = {[0, a] : a ∈ {0, 1, 2, . . . , 2^n − 1}}.

However, if we allow the algorithm to make one mistake, we show that the number of ⊥'s can be reduced to O(√|H|). In general, if k mistakes are allowed, there is an algorithm that can learn any class H with at most (k + 1)|H|^{1/(k+1)} ⊥'s. The algorithm is similar to the one for the classic "egg game" puzzle (see [GF]). First suppose that k = 1. We make a pool of all functions in H, initially consisting of |H| candidates. Whenever an example arrives, we see how many of the candidates label it + and how many label it −. If the population of the minority is < √|H|, we predict the label that the majority gives on the example; however, if the population of the minority is ≥ √|H|, we say ⊥. Those functions that predict incorrectly on an example are removed from the pool in each step, so the pool is just the current version space. If we make a mistake in some step, the size of the version space will reduce to < √|H|. Hence, using Algorithm 1, we can complete the learning with at most √|H| additional ⊥'s after our first mistake. Furthermore, note that before making any mistake, we remove at least √|H| of the functions from the pool each time we answer ⊥. Therefore, the total number of ⊥'s cannot exceed 2√|H|. This technique can be generalized to k mistakes, but first we mention a connection between this problem and the classic "egg game" puzzle.
Example 2 Egg Game Puzzle
You are given 2 identical eggs, and you have access to an n-story building. The eggs can be very hard or very fragile or anywhere in between: they may break if dropped from the first floor, or may not break even if dropped from the n-th floor. You need to figure out the highest floor from which an egg can be dropped without breaking. The question is how many drops you need to make. Note that you can break only two eggs in the process.
The answer to this puzzle is √(2n) up to an additive constant. In fact, a thorough analysis of the puzzle when there are k eggs available, instead of just two, is given in [GF]. The ⊥-minimization problem when k mistakes are allowed is clearly related to the egg game puzzle when the building has |H| floors and there are k + 1 eggs available. As a result, with a slightly smarter algorithm that adjusts the threshold √|H| recursively each time an example arrives, we can decrease the number of ⊥'s from 2√|H| to √(2|H|).
Algorithm 2 Learning in the KWIK Model with at most k Mistakes
Let s = |H|^{k/(k+1)}, and let P denote the current version space: the pool of all functions that might still be the target. Initially P = H, but during the learning process we remove functions from P. For each example that arrives, examine how many functions in P label it + and how many label it −. If the minority population is > s, we answer ⊥; otherwise, we answer the majority prediction. At the end of each step, we remove from P the functions that made a mistake in the last step. Whenever we make a mistake, we update s = |P|^{(k−i)/(k+1−i)}, where i is the number of mistakes we have made so far.
Proposition 1 Algorithm 2 learns a concept class H with at most k mistakes and (k + 1)|H|^{1/(k+1)} "I don't know"s.

Proof: After the first mistake, the size of the pool reduces to < |H|^{k/(k+1)}. Hence, using induction, we can argue that after the first mistake, the learning can be done with k − 1 mistakes and k(|H|^{k/(k+1)})^{1/k} ⊥'s. There can be at most |H| / |H|^{k/(k+1)} = |H|^{1/(k+1)} ⊥'s before the first mistake. Therefore, the total number of ⊥'s will not exceed
|H|^{1/(k+1)} + k(|H|^{k/(k+1)})^{1/k} = (k + 1)|H|^{1/(k+1)}. □
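A minimal sketch of Algorithm 2, assuming H is small enough to enumerate explicitly (it enumerates H at every step, so it inherits the inefficiency discussed next); the function name, the '+'/'-' label encoding, and the threshold bookkeeping are illustrative choices, not fixed by the paper.

def learn_with_k_mistakes(H, examples, true_labels, k):
    P = list(H)                                 # current version space
    mistakes, dont_knows = 0, 0
    s = len(P) ** (k / (k + 1))                 # threshold s = |H|^{k/(k+1)}
    for x, y in zip(examples, true_labels):
        plus = sum(1 for h in P if h(x) == '+')
        minus = len(P) - plus
        if min(plus, minus) > s:
            dont_knows += 1                     # minority too large: answer bottom
            made_mistake = False
        else:
            guess = '+' if plus >= minus else '-'
            made_mistake = (guess != y)
            mistakes += made_mistake
        P = [h for h in P if h(x) == y]         # prune hypotheses wrong on x
        if made_mistake and k + 1 - mistakes > 0:
            # re-adjust the threshold: s = |P|^{(k-i)/(k+1-i)} after i mistakes
            s = len(P) ** ((k - mistakes) / (k + 1 - mistakes))
    return mistakes, dont_knows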
Before moving to the next section, we should mention that Algorithm 2 is not computationally efficient. In particular, if H contains exponentially many functions in the natural parameters of the problem, which is often the case, the running time of Algorithm 2 becomes exponential. In the next section, we give polynomial-time algorithms for two important concept classes.
3 The Mistake Bound Model with "I don't know" predictions
We can look at the problem from another perspective: instead of adding mistakes to the KWIK framework, we can add "I don't know" to the Mistake-bound model. In many cases, we prefer our algorithm to say "I don't know" rather than make a mistake. Therefore, in this section, we try to improve over optimal mistake bounds by allowing the algorithm to use a modest number of ⊥'s, and more generally to consider the tradeoff between the number of mistakes and the number of ⊥'s. Note that an algorithm can always replace its ⊥'s with random +'s and −'s; therefore, we must expect that decreasing the number of mistakes by one requires increasing the number of ⊥'s by at least one.
3.1 Monotone Disjunctions
We start with the concept class of Monotone Disjunctions. A monotone disjunction is a disjunction in which no literal appears negated, that is, a function of the form
f(x_1, . . . , x_n) = x_{i_1} ∨ x_{i_2} ∨ · · · ∨ x_{i_k}.
Each example is a boolean vector of length n, and an example is labeled + if and only if at least one of the variables that belong to the target function is set to 1 in the example. We know that this class can be learned with at most n mistakes in the Mistake-bound model [Lit88], where n is the total number of variables. This class is particularly interesting because results derived about monotone disjunctions can be applied to other classes as well, such as general disjunctions, conjunctions, and k-DNF formulas. We are interested in decreasing the number of mistakes at the cost of having (hopefully few) ⊥'s.
First, let's not worry about the running time and see how well Algorithm 2 performs here. We have |H| = 2^n; if we let k = n/i, the bound that we get on the number of ⊥'s will be ≈ (n/i)2^i; this is not bad, especially for small i, e.g., i = 2, 3. In fact, for the case of i = 2, we are trading off each mistake for four "I don't know"s. But unfortunately, Algorithm 2 cannot do this in polynomial time. Our next goal is to design an algorithm which runs in polynomial time and guarantees the same good bounds on the number of ⊥'s.
Algorithm 3 Learning Monotone Disjunctions with at most n/2 Mistakes
Let P, P+ and P− be three pools of variables. Initially, P = {x_1, . . . , x_n} and P+ = P− = ∅. During the process of learning, variables will be moved from P to P− or P+. The pool P+ is the set of variables that we know must exist in the target function; the pool P− is the set of variables that we know cannot exist in the target function. The learning process finishes by the time P gets empty.

In each step, an example x arrives. Let S ⊆ {x_1, . . . , x_n} be the set representation of x, i.e., x_i ∈ S if and only if x[i] = 1. If S ∩ P+ ≠ ∅, we can say for sure that the example is +. If S ⊆ P−, we can say for sure that the example is negative. Otherwise, it must be the case that S ∩ P ≠ ∅, and we cannot be sure about our prediction. Here, if |S ∩ P| ≥ 2 we answer +; otherwise, i.e., if |S ∩ P| = 1, we answer ⊥.

If we make a mistake, we move S ∩ P to P−. Every time we answer ⊥, we move S ∩ P to P+ or P−, depending on the correct label of the example.
Proposition 2 Algorithm 3 learns the class of Monotone Disjunctions with at most M ≤ n/2 mistakes and n − 2M ⊥'s.
Proof: If we make a mistake, it must be the case that the answer had been negative while we answered positive; for this to happen, we must have |S ∩ P| ≥ 2. So, after a mistake, we can move S ∩ P to P−. The size of P therefore decreases by at least 2.
Every time we say ⊥, it must be the case that |S ∩ P| = 1. Therefore, the label of the example is positive iff S ∩ P is contained in the target function, and so the algorithm correctly moves S ∩ P to P+ or P−. Additionally, the size of P decreases by at least one on each ⊥ prediction. □
Algorithm 3, although very simple, has an interesting property. If, in an online learning setting, saying ⊥ is cheaper than making a mistake, Algorithm 3 strictly dominates the best algorithm in the Mistake-bound model. Note that the sum of its ⊥'s and its mistakes is never more than n. More precisely, if the cost of making a mistake is 1 and the cost of saying ⊥ is < 1, the worst-case cost of this algorithm is strictly smaller than n.
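For concreteness, a minimal Python sketch of Algorithm 3, assuming examples arrive as 0/1 lists and labels as '+'/'-'; the function and pool names are illustrative.

def learn_disjunction_half(examples, true_labels, n):
    P = set(range(n))                    # undetermined variables
    P_plus, P_minus = set(), set()       # known-relevant / known-irrelevant
    mistakes, dont_knows = 0, 0
    for x, y in zip(examples, true_labels):
        S = {i for i in range(n) if x[i] == 1}
        if S & P_plus or S <= P_minus:
            continue                     # confident answer, always correct
        if len(S & P) >= 2:              # guess '+'
            if y == '-':                 # mistake: none of S & P is relevant
                mistakes += 1
                P_minus |= S & P
                P -= S & P
        else:                            # |S & P| == 1: answer bottom
            dont_knows += 1
            (P_plus if y == '+' else P_minus).update(S & P)
            P -= S & P
    return mistakes, dont_knows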
Next we present an algorithm for decreasing the number of mistakes to n/3.
Algorithm 4 Learning Monotone Disjunctions with at most n/3 Mistakes
Let P, P+, P− be defined as in Algorithm 3. We have another pool P∨ which consists of pairs of variables such that for each pair we know at least one of the variables belongs to the target function. As before, the pools form a partition over the set of all variables. In addition, a variable can belong to at most one pair in P∨. Thus, any given variable is either in a single pair of P∨ or else in exactly one of the sets P, P+, or P−.

Whenever an example x arrives, we do the following. Let S ⊆ {x_1, . . . , x_n} be the set representation of x, i.e., x_i ∈ S if and only if x[i] = 1. If S ∩ P+ ≠ ∅, we answer +. If S ⊆ P−, we answer −. Also, if S contains both members of a pair in P∨, we can say that the label is +.

If none of the above cases happens, we cannot be sure about our prediction. In this case, if |S ∩ P| ≥ 3, we answer +. If |S ∩ (P ∪ P∨)| ≥ 2 and |S ∩ P∨| ≥ 1, we again answer +. Otherwise, we answer ⊥. A description of how the algorithm moves variables between sets upon receipt of the correct label is given in the proof below.
Proposition 3 Algorithm 4 learns the class of Monotone Disjunctions with at most M ≤ n/3 mistakes and 3n/2 − 3M ⊥'s.
Proof: If |S ∩ P| ≥ 3 and we make a mistake on S, then the size of P will be reduced by at least 3, and the size of P− will increase by at least 3. If |S ∩ (P ∪ P∨)| ≥ 2 and |S ∩ P∨| ≥ 1 and we make a mistake on S, then at least two variables will be moved from (P∨ ∪ P) to P−, and at least one variable will be moved from P∨ to P+ (since whenever a variable moves from P∨ to P−, the other variable in its pair should move to P+). Therefore, the size of P− ∪ P+ will increase by at least 3. Since |P− ∪ P+| ≤ n, we will not make more than n/3 mistakes.

There are three cases in which we may answer ⊥. If |S ∩ P| = 0 and |S ∩ P∨| = 1, we answer ⊥; however, after learning the correct label, S ∩ P∨ will be moved to P+ or P−. Therefore, the number of "I don't know"s of this type is bounded by n − 3M. If |S ∩ P| = 1 and |S ∩ P∨| = 0, again, after learning the correct label, S ∩ P will be moved to P+ or P−, so the same bound applies. If |S ∩ P| = 2 and |S ∩ P∨| = 0, the correct label might be + or −. If it is negative, then we can move S ∩ P to P− and use the same bound as before. If it is positive, the two variables in S ∩ P will be moved to P∨ as a pair. Note that there can be at most n/2 such ⊥'s; therefore, the total number of ⊥'s cannot exceed n/2 + n − 3M. □
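A sketch of Algorithm 4 along the same lines; the pool of pairs P∨ is written P_pair here, and the update rules follow the case analysis in the proof of Proposition 3. All names are illustrative.

def learn_disjunction_third(examples, true_labels, n):
    P = set(range(n))
    P_plus, P_minus = set(), set()
    P_pair = set()                        # frozensets {u, v}: at least one relevant
    mistakes, dont_knows = 0, 0

    def demote(v):
        """Reveal pair-variable v as irrelevant; its partner must be relevant."""
        for pair in [q for q in P_pair if v in q]:
            P_pair.remove(pair)
            P_plus.update(pair - {v})
        P_minus.add(v)

    for x, y in zip(examples, true_labels):
        S = {i for i in range(n) if x[i] == 1}
        pair_vars = set().union(*P_pair) if P_pair else set()
        SP, SV = S & P, S & pair_vars
        if S & P_plus or any(q <= S for q in P_pair) or S <= P_minus:
            continue                       # confident answer, always correct
        if len(SP) >= 3 or (len(SP | SV) >= 2 and len(SV) >= 1):
            if y == '-':                   # guessed '+' and were wrong
                mistakes += 1
                P_minus |= SP
                P -= SP
                for v in SV:
                    demote(v)
        else:                              # answer bottom, then sort variables
            dont_knows += 1
            if len(SP) == 2:               # two unknowns: keep as a pair if '+'
                if y == '+':
                    P_pair.add(frozenset(SP))
                else:
                    P_minus |= SP
                P -= SP
            elif len(SP) == 1:
                (P_plus if y == '+' else P_minus).update(SP)
                P -= SP
            else:                          # |SV| == 1
                v = next(iter(SV))
                if y == '+':
                    for pair in [q for q in P_pair if v in q]:
                        P_pair.remove(pair)
                        P |= pair - {v}    # the partner becomes unknown again
                    P_plus.add(v)
                else:
                    demote(v)
    return mistakes, dont_knows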
3.2 Learning Linear Separator Functions
In this section, we analyze how we can use ⊥ predictions to decrease the number of mistakes for efficiently learning linear separators with margin γ. The high-level idea is to use the basic approach of the generic algorithm in Section 2 for finite H, but rather than explicitly enumerating over functions, to instead efficiently estimate the measure of the functions in the version space that predict + versus those that predict −, and to make prediction decisions based on the result.

Setting: We assume a sequence S of n d-dimensional examples arrives one by one, and that these examples are linearly separable: there exists a unit-length separator vector w∗ such that w∗ · x > 0 if and only if x is a positive example. Define γ to be min_{x∈S} |w∗ · x| / |x|. For convenience, we will assume that all examples have unit length.
Below, we show how to formulate the problem as a linear program to bound the number of mistakes using some "I don't know" answers.

Assume that the n examples x_1, x_2, · · · , x_n are in the sequence S. These points arrive one at a time, and we have to answer when a point arrives. The objective is to make a small number of mistakes and some "I don't know" answers while finding a separation vector w such that w · x_i is positive if and only if x_i is a + point. We can formulate the following linear program using this instance (this sequence of points):
w · x_i > 0 if x_i is a + instance, and
w · x_i ≤ 0 if x_i is a − instance.
Note that there are d variables, which are the coordinates of the vector w, and there are n linear constraints, one per input point. Clearly we do not know which points are the + points, so we cannot write this linear program explicitly and solve it. But the points arrive one by one, and the constraints of this program are revealed over time. Note that if a vector w is a feasible solution of the above linear program, any positive multiple of w is also a feasible solution. In order to make the analysis easier and bound the core (the set of feasible solutions of the linear program), we can assume that the coordinates of the vector w are always in the range [−1 − γ/√d, 1 + γ/√d]. We can add 2d linear constraints to make sure that the coordinates do not violate these properties. We will see later why we are choosing the bounds to be −(1 + γ/√d) and 1 + γ/√d.
Now assume that we are at the beginning and no point has arrived, so we do not have any of the n constraints related to the points. At the beginning, the core of the linear program is the set of vectors w in [−1 − γ/√d, 1 + γ/√d]^d, so we start with a core (feasible set) of volume (2 + 2γ/√d)^d. For now assume that we cannot use the "I don't know" answers; we show how to use them later. The first point arrives. There are two possibilities for this point: it is either a + point or a − point. If we add either of these two constraints to the linear program, we obtain a more restricted linear program with a core of smaller volume. So we obtain one LP for each of the two possibilities, and the sum of the volumes of the cores of these two linear programs is equal to the volume of the core of our current linear program. We will show how to compute these volumes, but for now assume that they are computed. If the volume of the linear program for the + case is larger than for the − case, we answer +. If our answer is true, we are fine, and we have passed the query with no mistake. Otherwise we have made a mistake, but the volume of the core of our linear program is halved. We do the same for the − case as well, i.e., we answer − when the larger volume is for the − case.

Now there are two main issues we have to deal with. First of all, we have to find a way to compute the volume of the core of a linear program. Secondly, we have to find a way to bound the number of mistakes.
In fact, computing the volume of a linear program is #P-hard [DF88]. There exists a randomized polynomial-time algorithm that approximates the volume of the core of a linear program with a (1 + ε) approximation [DFK91], i.e., the relative error is ε. The running time of this algorithm is polynomial in n, d, and 1/ε. We can use this algorithm to get estimates of the volumes of the linear programs we need. But note that we really do not need to know the volumes of these linear programs; we just need to know whether the volume of the linear program of the + case is larger, the − case is larger, or they are approximately equal. Lovász and Vempala present a faster polynomial-time algorithm for sampling a uniformly random point from the core of a linear program in [LV06]. One way to estimate the relative volumes of both sides is to sample a uniformly random point from the core of our current linear program (without taking into account the newly arrived point), and see if the sampled point is on the + side or the − side. If we sample a sufficiently large number of points (here 2 log(n)/ε² is large enough), and if the majority of them are on the + (−) side, we can say that the volume of the linear program for the + (−) case is at least a (1/2 − ε) fraction of our current linear program with high probability. So we can answer based on the majority of these sampled points, and if we make a mistake, we know that the volume of the core of the linear program is multiplied by at most 1 − (1/2 − ε) = 1/2 + ε.
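As an illustration of the sampling step, the following hedged sketch replaces the hit-and-run sampler of [LV06] with naive rejection sampling from the bounding box; this is only practical in low dimension while the core still occupies a non-negligible fraction of the box. All names and the default ε = 0.1 are assumptions.

import numpy as np

def sample_core(constraints, d, gamma, n_samples, rng):
    """Uniform samples from the current core via rejection from the bounding box.
    constraints: list of (x, sign) with sign=+1 requiring w.x > 0, -1 requiring w.x <= 0."""
    bound = 1.0 + gamma / np.sqrt(d)
    kept = []
    while len(kept) < n_samples:
        w = rng.uniform(-bound, bound, size=d)
        if all((w @ x > 0) if s > 0 else (w @ x <= 0) for x, s in constraints):
            kept.append(w)
    return np.asarray(kept)

def predict_majority(x_new, constraints, d, gamma, n, rng, eps=0.1):
    """Majority vote of sampled separators on the new point (the pre-bottom rule)."""
    n_samples = int(np.ceil(2 * np.log(n) / eps ** 2))
    ws = sample_core(constraints, d, gamma, n_samples, rng)
    return '+' if (ws @ x_new > 0).sum() >= n_samples / 2 else '-'

# Usage, e.g.: predict_majority(x, seen, d=3, gamma=0.2, n=100, rng=np.random.default_rng(0))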
Suppose we have already processed the first ℓ examples and now the (ℓ + 1)-st example arrives. We have the linear program with the first ℓ constraints. We sample points from the core of this linear program, and based on the majority of them we answer + or − for this new example. Using the following lemma, we can bound the number of mistakes.

Lemma 4 With high probability (1 − 1/n^{Ω(1)}), for every mistake we make in our algorithm, the volume of the core of the linear program decreases by a factor of (1/2 + ε).
Proof: Without loss of generality, assume that we answered + but the correct answer was −. So we sampled 2 log n/ε² functions uniformly at random from the core, and the majority of them were predicting positive. If less than a (1/2 − ε) fraction of the volume was indeed predicting positive, each sampled point would be from the positive-predicting part with probability less than 1/2 − ε. So the expected number of positive sampled points would be less than (1/2 − ε)(2 log n/ε²) = log n/ε² − 2 log n/ε. Therefore, by Chernoff bounds, the chance of the sample having a majority of positive-predicting functions would be at most e^{−(2 log n/ε)² / (2(log n/ε² − 2 log n/ε))} = e^{−2 log n/(1−2ε)} = n^{−2/(1−2ε)}. Since there are n examples arriving, we can use the union bound to bound the probability of failure on any of these rounds: the probability that the volume of the core of our linear program is not multiplied by at most 1/2 + ε on some mistake is at most n · n^{−2/(1−2ε)} = n^{1−2/(1−2ε)}. Therefore, with high probability (at least 1 − n^{1−2/(1−2ε)}), for every mistake we make, the volume of the core is multiplied by at most 1/2 + ε. □
Now we show that the core of the linear program after adding all n constraints (the constraints induced by the points) has a decent volume in terms of γ.

Lemma 5 If there is a unit-length separator vector w∗ with min_{x∈S} (w∗ · x)/|x| = γ, the core of the complete linear program after adding all n constraints of the points has volume at least (2γ/√d)^d.
Proof: Clearly w∗ is in the core of our linear program. Consider a vector w′ whose coordinates are all in the range (−γ/√d, γ/√d). We claim that (w∗ + w′) is a correct separator. Consider a point x_i; without loss of generality assume that it is a + point. So w∗ · x_i is at least γ|x_i|. We also know that |w′ · x_i| is at most |w′| · |x_i|. Since all d coordinates of w′ are in the range (−γ/√d, γ/√d), we can say that |w′| is less than γ. So (w∗ + w′) · x_i = w∗ · x_i + w′ · x_i > γ|x_i| − γ|x_i| = 0, i.e., it is positive. We also know that the coordinates of w∗ + w′ are in the range (−1 − γ/√d, 1 + γ/√d), because w∗ has unit length (so all its coordinates are between −1 and 1) and the coordinates of w′ are in the range (−γ/√d, γ/√d). Therefore all vectors of the form w∗ + w′ are in the core. We conclude that the volume of the core is at least (2γ/√d)^d. □
Lemmas 4 and 5 give us the following folklore theorem.

Theorem 6 The total number of mistakes in the above algorithm is not more than
log_{2/(1+2ε)} [(2 + 2γ/√d)^d / (2γ/√d)^d] = log_{2/(1+2ε)} [(1 + γ/√d)^d / (γ/√d)^d] = O(d(log d + log 1/γ)).

Proof: The proof follows directly from Lemmas 4 and 5. □
Now we make use of the "I don't know" answers to reduce the number of mistakes. Assume that we do not want to make more than k mistakes. Define Y₁ to be (2 + 2γ/√d)^d, which is the volume of the core at the beginning, before adding any of the constraints of the points. Define Y₂ to be (2γ/√d)^d, which is a lower bound on the volume of the core after adding all the constraints of the points. Let R be the ratio Y₁/Y₂. In the above algorithm, we do not make more than log_{2/(1+2ε)} R mistakes.

We want to use "I don't know" answers to reduce this number of mistakes. Define C to be R^{1/k}. Let V, V₁, and V₂ be the volumes of the cores of the current linear program, the linear program with the additional constraint that the new point is a + point, and the linear program with the additional constraint that the new point is a − point, respectively. If V₁/V is at most 1/C, we can say that the new point is a − point. In this case, even if we make a mistake, the volume of the core is divided by at least C, and by the definition of C this cannot happen more than log_C R = k times. Similarly, if V₂/V is at most 1/C, we can say the new point is a + point. If V₁/V and V₂/V are both greater than 1/C, we answer "I don't know", and we know that the volume of the core is multiplied by at most 1 − 1/C.

Since we just need to estimate the ratios V₁/V and V₂/V, and in fact we only want to see whether either of them is smaller than 1/C, we can simply sample points from the core of our current linear program. But we have to sample at least O(C log n) points to be able to have reasonable estimates with high probability for these two specific tests (to see if V₁/V or V₂/V is at least 1/C). For example, if we sample 16C log n points and there are at most 8 log n + points among them, we can say that V₁/V is at most 1/C with probability at least 1 − e^{−64 log²n / (32 log n)} = 1 − 1/n². But if there are at least 8 log n + points and 8 log n − points among the samples, we can say that both V₁/V and V₂/V are at least 1/(8C) with high probability, using Chernoff bounds.

If we make a mistake in this algorithm, the volume of the core is divided by at least C, so we do not make more than k mistakes. We also know that for each "I don't know" answer the volume of the core is multiplied by at most 1 − 1/(8C), so after 8C "I don't know" answers the volume of the core is multiplied by at most 1/e. Therefore there are at most O(C log R) "I don't know" answers. This completes the proof of the following theorem.

Theorem 7 For any k > 0, we can learn a linear separator of margin γ in R^d using the above algorithm with k mistakes and O(R^{1/k} · log R) "I don't know" answers, where R is equal to (1 + γ/√d)^d / (γ/√d)^d.
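A small numeric illustration of the tradeoff in Theorem 7, under assumed values of d, γ, and k (all illustrative):

import math

d, gamma = 10, 0.1
R = ((1 + gamma / math.sqrt(d)) / (gamma / math.sqrt(d))) ** d   # R = Y1 / Y2
for k in (1, 2, 5, 10):
    C = R ** (1.0 / k)
    print(f"k = {k:2d} mistakes -> O(C log R) = O({C * math.log(R):.3g}) don't-knows")

As expected, the number of don't-know answers falls off dramatically as more mistakes are tolerated, since C = R^{1/k} shrinks exponentially in 1/k.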
4 Conclusion
We have discussed a learning framework that combines elements of the KWIK and mistake-bound models. From one perspective, we are allowing the algorithm to make mistakes in the KWIK model. We showed, using a version-space algorithm and through a reduction to the egg-game puzzle, that allowing a few mistakes in the KWIK model can significantly decrease the number of don't-know predictions.

From another point of view, we are letting the algorithm say "I don't know" in the mistake-bound model. This can be particularly useful if don't-know predictions are cheaper than mistakes and we can trade off some number of mistakes for a not-too-much-larger number of "I don't know"s. We gave polynomial-time algorithms that effectively reduce the number of mistakes in the mistake-bound model using don't-know predictions for two concept classes: monotone disjunctions and linear separators with a margin.
Acknowledgement
The authors are very grateful to Adam Kalai, Sham Kakade and Nina Balcan as well as anonymous
reviewers for helpful discussions and comments.
References
[DF88]   Martin E. Dyer and Alan M. Frieze. On the complexity of computing the volume of a polyhedron. SIAM J. Comput., 17(5):967–974, 1988.
[DFK91]  Martin E. Dyer, Alan M. Frieze, and Ravi Kannan. A random polynomial time algorithm for approximating the volume of convex bodies. J. ACM, 38(1):1–17, 1991.
[DLL09]  C. Diuk, L. Li, and B.R. Leffler. The adaptive k-meteorologists problem and its application to structure learning and feature selection in reinforcement learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 249–256. ACM, 2009.
[GF]     Gasarch and Fletcher. The Egg Game. www.cs.umd.edu/~gasarch/BLOGPAPERS/egg.pdf.
[KK99]   M. Kearns and D. Koller. Efficient reinforcement learning in factored MDPs. In International Joint Conference on Artificial Intelligence, volume 16, pages 740–747. Citeseer, 1999.
[KS02]   M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2):209–232, 2002.
[Lit88]  N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4):285–318, 1988.
[LLW08]  L. Li, M.L. Littman, and T.J. Walsh. Knows what it knows: a framework for self-aware learning. In Proceedings of the 25th International Conference on Machine Learning, pages 568–575. ACM, 2008.
[LV06]   László Lovász and Santosh Vempala. Hit-and-run from a corner. SIAM J. Comput., 35(4):985–1005, 2006.
[RS88]   R.L. Rivest and R. Sloan. Learning complicated concepts reliably and usefully. In Proceedings of AAAI-88, pages 635–639, 1988.
[SL08]   A.L. Strehl and M.L. Littman. Online linear regression and its application to model-based reinforcement learning. In Advances in Neural Information Processing Systems 20, 2008.
[WSDL09] T.J. Walsh, I. Szita, C. Diuk, and M.L. Littman. Exploring compact reinforcement-learning representations with linear regression. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI-09), 2009.
3,471 | 4,143 | Pose-Sensitive Embedding
by Nonlinear NCA Regression
Graham W. Taylor, Rob Fergus, George Williams, Ian Spiro and Christoph Bregler
Courant Institute of Mathematics, New York University
New York, USA 10003
gwtaylor,fergus,spiro,[email protected]
Abstract
This paper tackles the complex problem of visually matching people in similar
pose but with different clothes, background, and other appearance changes. We
achieve this with a novel method for learning a nonlinear embedding based on
several extensions to the Neighborhood Component Analysis (NCA) framework.
Our method is convolutional, enabling it to scale to realistically-sized images. By
cheaply labeling the head and hands in large video databases through Amazon
Mechanical Turk (a crowd-sourcing service), we can use the task of localizing
the head and hands as a proxy for determining body pose. We apply our method
to challenging real-world data and show that it can generalize beyond hand localization to infer a more general notion of body pose. We evaluate our method
quantitatively against other embedding methods. We also demonstrate that realworld performance can be improved through the use of synthetic data.
1 Introduction
Determining the pose of a human body from one or more images is a central problem in Computer
Vision. The complex, multi-jointed nature of the body makes the determination of pose challenging,
particularly in natural settings where ambiguous and unusual configurations may be observed. The
ability to localize the hands is particularly important: they provide tight constraints on the layout of
the upper body, yielding a strong cue as to the action and intent of a person.
A huge range of techniques, both parametric and non-parametric, exist for inferring body pose from
2D images and 3D datasets [10, 39, 4, 28, 33, 8, 3, 6, 11]. We propose a non-parametric approach to
Figure 1: Query image (in left column) and the eight nearest neighbours found by our method. Distance in the learned embedding space is shown at bottom right. Matches are based on the location of the hands and, more generally, on body pose, not on the individual or the background.
estimating body pose by localizing the hands using a parametric, nonlinear multi-layered embedding
of the raw pixel images. Unlike many other metric learning approaches, ours is designed for use with
real-world images, having a convolutional architecture that scales gracefully to large images and is
invariant to local geometric distortions.
Our embedding, trained on both real and synthetic data, is a functional mapping that projects images
with similar head and hand positions to lie close-by in a low-dimensional output space. Efficient
nearest-neighbour search can then be performed in this space to find images in a large training
corpus that have similar pose. Specifically for this task, we have designed an interface to obtain
and verify head and hand labels for thousands of frames through Amazon Mechanical Turk with
minimal user intervention. We find that our method is able to cope with the terse and noisy labels
provided by crowd-sourcing. It succeeds in generalizing to body and hand pose when such cues are
not explicitly provided in the labels (see Fig. 1).
2 Related work
Our application domain is related to several approaches in the computer vision literature that propose
hand or body pose tracking. Many techniques rely on sliding-window part detectors based on color
and other features applied to controlled recording conditions ([10, 39, 4, 28], to name a few; we
refer to [32] for a complete survey). In our domain, hands might only occupy a few pixels, and the
only body-part that can reliably be detected is the human face ([26, 13]). Many techniques have
been proposed that extract, learn, or reason over entire body features. Some use a combination of
local detectors and structural reasoning (see [33] for coarse tracking and [8] for person-dependent
tracking). In a similar spirit, more general techniques using pictorial structures [3, 12, 35], "poselets" [6], and other part-models [11] have received increased attention.
model-based techniques based on the HumanEva dataset has been proposed [37], but this area differs
from our domain in that the images considered are of higher quality and less cluttered.
More closely related to our task are nearest-neighbour and locally-weighted regression-based techniques. Some extract "shape-context" edge-based histograms from the human body [25, 1] or just silhouette features [15]. Shakhnarovich et al. [36] use HOG [9] features and boosting for learning a parameter-sensitive hash function. All these approaches rely on good background subtraction
or recordings with clear backgrounds. Our domain contains clutter, lighting variations and low
resolution such that it is impossible to separate body features from background successfully. We
instead learn relevant features directly from pixels (instead of pre-coded edge or gradient histogram
features), and discover implicitly background invariance from training data.
Several other works [36, 9, 4, 15] have used synthetically created data as a training set. We show in
this paper several experiments with challenging real video (with crowd-sourced Amazon Mechanical
Turk labels), synthetic training data, and hybrid datasets. Our final system (after training) is always
applied to the cluttered non-background subtracted real video input without any labels.
Our technique is also related to distance metric learning, an important area of machine learning
research, especially due to recent interest in analyzing complex high-dimensional data. A subset
of approaches for dimensionality reduction [17, 16] implicitly learn a distance metric by learning
a function (mapping) from high-dimensional (i.e., pixel) space to low-dimensional "feature" space
such that perceptually similar observations are mapped to nearby points on a manifold. Neighbourhood Components Analysis (NCA) [14] proposes a solution where the transformation from input
to feature space is linear and the distance metric is Euclidean. NCA learns the transformation that
is optimal for performing KNN in the feature space. NCA has also been recently extended to the
nonlinear case [34] using MNIST class labels and to linear 1D regression for reinforcement learning
[20]. Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) [16] also learns a nonlinear mapping. Like NCA, DrLIM uses class neighbourhood structure to drive the optimization:
observations with the same class label are driven to be close-by in feature space. Our approach
is also inspired by recent hashing methods [2, 34, 38], although those techniques are restricted to
binary codes for fast lookup.
3 Learning an invariant mapping by nonlinear embedding
We first discuss Neighbourhood Components Analysis [14] and its nonlinear variants. We then propose an alternative objective function optimized for performing nearest neighbour (NN) regression
rather than classification. Next, we describe our convolutional architecture which maps images from
high-dimensional to low-dimensional space. Finally we introduce a related but different objective
for our model based on DrLIM.
3.1 Neighbourhood Components Analysis
NCA (both linear and nonlinear) and DrLIM do not presuppose the existence of a meaningful and
computable distance metric in the input space. They only require that neighbourhood relationships
be defined between training samples. This is well-suited for learning a metric for non-parametric
classification (e.g. KNN) on high-dimensional data. If the original data does not contain discrete
class labels, but real-valued labels (e.g., pose information for images of people), one alternative is to
define neighbourhoods based on the distance in the real-valued label space and proceed as usual.
However, if classification is not our ultimate goal, we may wish to exploit the "soft" nature of the
labels and use an alternative objective (i.e. one that does not optimize KNN performance).
Suppose we are given a set of N labeled training cases {x_i, y_i}, i = 1, 2, . . . , N, where x_i ∈ R^D and y_i ∈ R^L. Each training point, i, selects another point, j, as its neighbour with some probability defined by normalizing distances in the transformed feature space [14]:

p_ij = exp(−d²_ij) / Σ_{k≠i} exp(−d²_ik),    p_ii = 0,    d_ij = ||z_i − z_j||₂    (1)
where we use a Euclidean distance metric d_ij and z_i = f(x_i | θ) is the mapping (parametrized by θ) from input space to feature space. For NCA this is typically linear, but it can be extended to be nonlinear through back-propagation (for example, in [34] it is a multi-layer neural network). NCA assumes that the labels y_i are discrete, y_i ∈ {1, 2, . . . , C}, rather than real-valued, and seeks to maximize the expected number of correctly classified points on the training data, which amounts to minimizing:

L_NCA = − Σ_{i=1}^{N} Σ_{j : y_i = y_j} p_ij.    (2)
The parameters are found by minimizing L_NCA with respect to θ, back-propagating in the case of a multi-layer parametrization. Instead of seeking to optimize KNN classification performance, we can use the NCA regression (NCAR) objective [20]:

L_NCAR = Σ_{i=1}^{N} Σ_{j≠i} p_ij ||y_i − y_j||₂².    (3)
Intuitively, this states that if, with high probability, i and j are neighbours in feature space, then
they should also lie close-by in label space. While we use the Euclidean distance in label space, our
approach generalizes to other metrics which may be more appropriate for a different domain.
Keller et al. [20] consider the linear case of NCAR, where θ is a weight matrix and y is a scalar representing Bellman error, to map states with similar Bellman errors close together. Similar to NCA, we can extend this objective to the nonlinear, multi-layer case. We simply need to compute the derivative of L_NCAR with respect to the output of the mapping, z_i, and backpropagate through the remaining layers of the network. The gradient can be computed efficiently as:

∂L_NCAR/∂z_i = −2 Σ_{j≠i} (z_i − z_j) [ p_ij (y²_ij − ξ_i) + p_ji (y²_ij − ξ_j) ]    (4)

where we use y²_ij = ||y_i − y_j||₂² and ξ_i = Σ_j p_ij y²_ij. See the supplementary material for details.
3.2 Convolutional architectures
As [34] points out, nonlinear NCA was originally proposed in [14] but with the exception of a
modest success with a two-layer network in extracting 2D codes that explicitly represented the
size and orientation of face images, attempts to extract more complex properties using multi-layer
feature extraction were less successful. This was due, in part, to the difficulty in training multi-layer
networks and the fact that many data pairs are required to fit the large number of network parameters.
Though both [34] and [38] were successful in learning a multi-layer nonlinear mapping of the data,
there is still a fundamental limitation of using fully-connected networks that must be addressed.
Such an architecture can only be applied to relatively small image patches (typically less than 64 × 64 pixels), because fully-connected networks do not scale well with the size of the input. Salakhutdinov and Hinton
escaped this issue by training only on the MNIST dataset (28 ? 28 images of digits) and Torralba
et al. used a global image descriptor [29] as an initial feature representation rather than pixels.
However, to avoid such hand-crafted features which may not be suitable for the task, and to scale to
realistic sized inputs, models should take advantage of the pictorial nature of the image input. This is
addressed by convolutional architectures [21], which exploit the fact that salient motifs can appear
anywhere in the image. By employing successive stages of weight-sharing and feature-pooling,
deep convolutional architectures can achieve stable latent representations at each layer, that preserve
locality, provide invariance to small variations of the input, and drastically reduce the number of free
parameters.
Our proposed method, which we call Convolutional NCA regression (C-NCAR), is based on a standard convolutional architecture [21, 18]: alternating convolution and subsampling layers followed
by a single fully-connected layer (see Fig. 2). It differs from typical convolutional nets in the objective function with which it is trained (i.e. minimizing Eq. 3). Because the loss is defined on pairs of
examples, we use a siamese network [5]. Pairs of frames are processed by separate networks with
equal weights. The loss is then computed on the output of both networks. Hadsell et al. [16] also
use a siamese convolutional network with yet a different objective. They use their method for visualization but not any discriminative task. Mobahi et al. [24] have also recently used a convolutional
siamese network in which temporal coherence between pairs of frames drives the regularization of
the model rather than the objective. More details of training our network are given in Sec. 4.
[Figure 2 diagram: input 128×128 → Layer 1: 16×120×120 (convolutions, tanh(), abs()) → Layer 2: 16×24×24 (average pooling) → Layer 3: 32×16×16 (convolutions, tanh(), abs()) → Layer 4: 32×4×4 (average pooling) → fully connected → output 32×1×1; the distance d(z_i, z_j) is computed between the codes of the two inputs x_i and x_j.]
Figure 2: Convolutional NCA regression (C-NCAR). Each image is processed by two convolutional
and subsampling layers and one fully-connected layer. A loss (Eq. 3) computed on the distance
between resulting codes drives parameter learning.
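A hedged PyTorch sketch of one tower of the siamese network in Fig. 2; the 9×9 kernel sizes are inferred from the layer shapes (128 − 9 + 1 = 120 and 24 − 9 + 1 = 16) rather than stated in the text, so treat them as assumptions. Both frames of a pair are passed through the same Tower instance (weight sharing), and the batch loss below is applied to the resulting codes.

import torch
import torch.nn as nn

class Tower(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=9)    # 1x128x128 -> 16x120x120
        self.pool1 = nn.AvgPool2d(5)                    # -> 16x24x24
        self.conv2 = nn.Conv2d(16, 32, kernel_size=9)   # -> 32x16x16
        self.pool2 = nn.AvgPool2d(4)                    # -> 32x4x4
        self.fc = nn.Linear(32 * 4 * 4, 32)             # -> 32-dim code

    def forward(self, x):
        x = self.pool1(torch.abs(torch.tanh(self.conv1(x))))
        x = self.pool2(torch.abs(torch.tanh(self.conv2(x))))
        return self.fc(x.flatten(1))

def ncar_batch_loss(z, y):
    """Eq. 3 on a mini-batch of codes z (N, 32) and labels y (N, L)."""
    d2 = torch.cdist(z, z) ** 2
    d2.fill_diagonal_(float('inf'))                     # enforce p_ii = 0
    p = torch.softmax(-d2, dim=1)
    y2 = torch.cdist(y, y) ** 2
    return (p * y2).sum()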
3.3 Adding a contrastive loss function
Like NCA, DrLIM assumes a discrete notion of similarity or dissimilarity between data pairs, x_i and x_j. It defines both a "similarity" loss, L_S, which penalizes similar points that are far apart in code space, and a "dissimilarity" loss, L_D, which penalizes dissimilar points that lie within a user-defined margin, m, of each other:

L_S(x_i, x_j) = (1/2) d²_ij,    L_D(x_i, x_j) = (1/2) {max(0, m − d_ij)}²    (5)

where d_ij is given by Eq. 1. Let δ_ij be an indicator such that δ_ij = 1 if x_i and x_j are deemed similar and δ_ij = 0 if x_i and x_j are deemed dissimilar. For example, if the labels y_i are discrete, y_i ∈ {1, 2, . . . , C}, then δ_ij = 1 for y_i = y_j and δ_ij = 0 otherwise. The total loss is defined by:

L_DrLIM = Σ_{i=1}^{N} Σ_{j≠i} [ δ_ij L_S(x_i, x_j) + (1 − δ_ij) L_D(x_i, x_j) ].    (6)
When faced with real-valued labels, y_i, we can avoid explicitly defining similarity and dissimilarity (e.g., via thresholding) by defining a "soft" notion of similarity:

δ̂_ij = exp(−||y_i − y_j||₂²) / Σ_{k≠i} exp(−||y_i − y_k||₂²).    (7)

Replacing the indicator variables δ_ij with δ̂_ij in Eq. 6 yields what we call the soft DrLIM loss.
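A corresponding NumPy sketch of the soft DrLIM loss (Eqs. 5-7) on precomputed codes Z and labels Y; the names and the fully vectorized layout are illustrative.

import numpy as np

def soft_drlim_loss(Z, Y, m=1.25):
    d = np.sqrt(((Z[:, None] - Z[None, :]) ** 2).sum(-1))    # code distances d_ij
    y2 = ((Y[:, None] - Y[None, :]) ** 2).sum(-1)
    logits = -y2
    np.fill_diagonal(logits, -np.inf)
    delta = np.exp(logits - logits.max(axis=1, keepdims=True))
    delta /= delta.sum(axis=1, keepdims=True)                # soft similarity (Eq. 7)
    Ls = 0.5 * d ** 2                                        # similarity loss (Eq. 5)
    Ld = 0.5 * np.maximum(0.0, m - d) ** 2                   # dissimilarity loss
    np.fill_diagonal(Ls, 0.0)
    np.fill_diagonal(Ld, 0.0)
    return float((delta * Ls + (1 - delta) * Ld).sum())      # Eq. 6 with soft delta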
4 Experimental results
We evaluate our approach in real and synthetic environments by performing 1-nearest neighbour
(NN) regression using a variety of standard and learned metrics described below. For every query
image in a test set, we compute its distance (under the metric) to each of the training points in a
database. We then copy the label (e.g. (x,y) position of the head and hands) of the neighbour to the
query example. For evaluation, we compare the ground-truth label of the query to the label of the
nearest neighbour. Errors are reported in terms of mean pixel error over each query and each marker:
the head (if it is tracked) and each hand. Errors are absolute with respect to the original image size.
We acknowledge that improved results could potentially be obtained by using more than one neighbour or with more sophisticated techniques such as locally weighted regression [36]. However, we
focus on learning a good metric for performing this task rather than the regression problem. The
approaches compared are:
Pixel distance can be used to find nearest neighbours though it is not practical in real situations due
to the intractability of computing distances in such a high-dimensional space.
GIST descriptors [29] are a global representation of image content. We are motivated to use GIST
by its previous use in nonlinear NCA for image retrieval [38]. The resulting image representation
is a length-512 vector. We note that this is still too large for efficient NN search and that the GIST
features are not domain-adaptive.
Linear NCA regression (NCAR) is described in Section 3. We pre-compute GIST for each image
and use that as our input representation. We learn a 512 ? 32 matrix of weights by minimizing
Eq. 3 using nonlinear conjugate gradients with randomly sampled mini-batches of size 512. We
perform three line-searches per mini-batch and stop learning after 500 mini-batches. We found that
our results slightly improved when we applied a form of local contrast normalization (LCN) prior
to computing GIST. Each pixel's response was normalized by the integrated response of a 9 × 9
window of neighbouring pixels. For more details see [30].
Convolutional NCA regression (C-NCAR) See Fig. 2 for a summary of our architecture. Images
are pre-processed using LCN. Convolutions are followed by pixel-wise tanh and absolute value
rectification. The abs prevents cancellations in local neighbourhoods during average downsampling
[18]. Our architectural parameters (size of filters, number of filter banks, etc.) are chosen to produce
a 32-dimensional output. Derivations of parameter updates are presented as supplementary material.
Soft DrLIM (S-DrLIM) and Convolutional soft DrLIM (CS-DrLIM) We also experiment with a
variant of an alternative, energy-based method that adds an explicit contrastive loss to the objective
rather than implicitly through normalization. The contrastive loss only operates on dissimilar points
which lie within a specified margin, m, of each other. We use m = 1.25 as suggested by [16].
In both the linear and nonlinear case, the architecture and training procedure remains the same as
NCAR and C-NCAR, respectively. We use a different objective: minimizing Eq. 6 with respect to
the parameters.
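Tying the methods together, a sketch of the 1-NN regression evaluation protocol described at the start of this section; `embed` stands for any of the mappings above and the label layout (an (x, y) pair per marker) follows the text, but the function itself is illustrative.

import numpy as np

def evaluate_1nn(embed, X_train, Y_train, X_test, Y_test):
    Z_train = embed(X_train)                           # e.g. (N, 32) codes
    Z_test = embed(X_test)
    errors = []
    for z, y in zip(Z_test, Y_test):
        nn = np.argmin(((Z_train - z) ** 2).sum(1))    # nearest neighbour in code space
        y_hat = Y_train[nn]                            # copy its label to the query
        # mean pixel distance over markers, labels stored as (x1, y1, x2, y2, ...)
        per_marker = np.sqrt(((y_hat - y).reshape(-1, 2) ** 2).sum(1))
        errors.append(per_marker.mean())
    return float(np.mean(errors))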
4.1 Estimating 2D head and hand pose from synthetic data
We extracted 10,000 frames of training data and 5,000 frames of test data from Poser renderings
of several hours of real motion capture data. Our synthetic data is similar to that considered in [36],
however, we use a variety of backgrounds rather than a constant background. Furthermore, subjects
are free to move around the frame and are rendered at various scales. The training set contains 6
different characters superimposed on 9 different backgrounds. The test set contains 6 characters and
8 backgrounds not present in the training set. The inputs, x, are 320 × 240 images, and the labels, y, are 6D vectors: the true (x, y) locations of the head and hands.
Results are shown in Table 1 (column SY). Simple linear NCAR performs well compared to the
baselines, while our nonlinear methods C-NCAR and CS-DrLIM (which are not restricted to the
GIST descriptor) significantly outperform all other approaches. Pixel-based matching (though extremely slow) does surprisingly well. This is perhaps an artifact of the synthetic data.
4.2 Estimating 2D hand pose from real video
We digitally recorded all of the contributing and invited speakers at the Learning Workshop (Snowbird) held in April 2010. The set consisted of 30 speakers, with talks ranging from 10-40 minutes
each. After each session of talks, blocks of 150 frames were distributed as Human Intelligence Tasks
Table 1: 1-NN regression performance on the synthetic (SY) dataset and the real (RE) dataset. Results are divided into baselines (no learning), linear embeddings, and nonlinear embeddings. Errors are the mean pixel distance between the nearest neighbour and the ground truth label of the query. For SY we locate the head and both hands. For RE we assume the location and scale of the head is given by a face detector and only locate the hands. The images at right indicate: (top) a radius of 25.40 pixels with respect to the 320×240 SY input; (bottom) a radius of 16.41 pixels with respect to the 128×128 RE input. Images have been scaled for the plot.
Embedding        Input      Dim    Error-SY  Error-RE
None             Pixels     16384  32.86     25.12
None             GIST       512    47.41     25.13
PCA              GIST       128    47.17     24.85
PCA              GIST       32     48.99     25.74
NCAR             GIST       32     34.21     24.93
NCAR             LCN+GIST   32     32.90     23.15
S-DrLIM          GIST       32     37.80     25.19
Boost-SSC [36]   LCN+GIST   32     34.80     22.65
C-NCAR           LCN        32     28.95     16.41
CS-DrLIM         LCN        32     25.40     19.61
on Amazon Mechanical Turk. We were able to obtain accurate hand and head tracks for each of the
speakers within a few hours of their talks. For the following experiments, we divided the 30 speakers
into a training set (odd numbered speakers) and test set (even numbered speakers).
Since current state-of-the-art face detection algorithms work reasonably well, we concentrate on the
harder problem of tracking the speakers? hands. We first run a commercial face detection algorithm
[26] on all frames which provides an estimate of scale for every frame. We use the average scale (per
video) estimated by the face detector to crop and rescale each frame to a 128x128 image (centered
on the head) that contains the speaker at roughly the same scale as other speakers (there is some
variability due to using average scale per video as speakers move throughout their talks). A similar
preprocessing step was used in [12]. We do not consider cases in which the hands lie outside
the frame or are occluded. This yields 39,792 and 37,671 training and test images, respectively,
containing the head and both hands. Since the images are head-centered, the labels, y, used during
training are the 4-dimensional vector containing the relative offset of each hand from the head.
We emphasize that finding the hands is an extremely difficult task (sometimes even for human subjects). Frames are low-resolution (typically the hands are 10-15 pixels in diameter) and contain
camera movement as well as frequently poor lighting. While previous work has assumed static
backgrounds, we confront the changing backgrounds and aim to learn invariance to both scene and
subject identity.
Results are shown in Table 1 (column RE). They are organized into three groups: baselines (highdimensional), and learning-based methods both linear and nonlinear. The linear methods are able
to achieve performance comparable to the baseline with the important attribute that distances are
computed in a 32-dimensional space. If the codes are made binary (as in [38]) we could use fast
approximate hashing techniques to permit real-time tracking using a database of well over 1 million
examples. The nonlinear methods show a dramatic improvement over the linear methods, especially
our convolutional architectures which learn features from pixels. Boost-SSC [36] is based on a
global representation similar to GIST, and so it is restricted in domain adaptivity. We also investigate
the performance of C-NCAR on code size (Fig. 5(a)). Performance is impressive even when the
dimension in which we compute distances is reduced from 32 to 2. A visualization of the 2D
embedding is shown in Fig. 3.
Fig. 4 shows some examples of nearest-neighbour matches under several different metrics. Most
apparent is that our methods, and in particular C-NCAR, develop invariance to background and focus
on the subject?s pose. Both pixel-based and GIST-batch matching are highly driven by the scene
(including lighting and background). Though our method is trained only on the relative positions
of the hands from the head, it appears to capture something more substantial about body pose in
general. We plan on evaluating this result quantitatively, using synthetic data in which we have
access to an articulated skeleton.
[Figure 3 plot: 2D embedding of the training points with four example clusters C1-C4 highlighted; scale bar: 16.41 px.]
Figure 3: Visualization of the 2D C-NCAR embedding of 1024 points from the RE training set. We
show the data points and their local geometry within four example clusters: C1-C4. Note that even
with a 2D embedding, we are able to capture pose similarity invariant to subject and background.
[Figure 4 graphic: four query rows; columns Query, C-NCAR, NCAR, GIST, Pixels, with per-match pose errors E ranging from 1.53-2.61 for C-NCAR up to 8.97-30.54 for the baselines.]
Figure 4: Nearest neighbour pose estimation. The leftmost column shows the query image, and
the remaining columns (left to right) show the nearest neighbour found by: nonlinear C-NCAR
regression, linear NCAR, GIST, pixel distance. Circles mark the pose obtained by crowd-sourcing;
we superimpose the pose estimated by C-NCAR onto the query with crosses.
4.3 Improving real-world performance with synthetic data
There has been recent interest in using synthetic examples to improve performance on real-world
vision tasks (e.g. [31]). The subtle differences between real and synthetic data make it difficult to
apply existing techniques to a dataset comprised of both types of examples. This problem falls under
the domain of transfer learning, but to the best of our knowledge, transfer learning between real and
synthetic pairings is relatively unexplored. While previous work has attempted to learn representations that are invariant to such effects as geometric distortions of the input [16] and temporal shifts
[5, 24] we know of no previous work that has explicitly attempted to learn features that are invariant
to the nature of the input, that is, real or synthetic.
[Figure 5 graphics: (a) pixel error (test) vs. dimension of code (2-32); (b) relative error (test) vs. number of synthetic examples (256-4096), with curves for no synthetic data, NCAR-1, and NCAR-2.]
Figure 5: (a) Effect of code size on the performance of Convolutional NCA regression. (b) Adding
synthetic data to a fixed dataset of 1024 real examples to improve test performance measured on
real data. Error is expressed relative to a training set with no synthetic data. NCAR-1 does not
re-initialize weights when more synthetic examples are added. NCAR-2 reinitializes weights to
the same random seed for each run. The curves show that adding synthetic examples improves
performance up to a point at which the synthetic examples outnumber the real examples 2:1.
The pairwise nature of our approach is well-suited to learning such invariance, provided that we have
established correspondences between real and synthetic examples. In our case of pose estimation,
this comes from the labels. By forcing examples with similar poses (regardless of whether they are
real or synthetic) to lie close-by in code space we can implicitly produce a representation at each
layer that is invariant to the nature of the input. We have not made an attempt to restrict pairings to
be only between real and synthetic examples, though this may further aid in learning invariance.
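A minimal sketch of how such correspondences could be formed from the labels (the threshold tau and all array names are our illustration, not the paper's recipe):

```python
import numpy as np

def make_pose_pairs(poses, is_real, tau):
    """Pair up frames whose 4-D pose labels lie within tau of each other.
    poses: (N, 4) labels; is_real: (N,) booleans marking real vs. synthetic
    frames. Cross-domain pairs (one real, one synthetic) are what push the
    code space toward invariance to the input's nature."""
    pairs = []
    for i in range(len(poses)):
        d = np.linalg.norm(poses - poses[i], axis=1)
        for j in range(i + 1, len(poses)):
            if d[j] < tau:
                cross = bool(is_real[i]) != bool(is_real[j])
                pairs.append((i, j, cross))  # third field flags cross-domain
    return pairs
```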
Fig. 5(b) demonstrates the effect of gradually adding synthetic examples from SY to the RE training
dataset. We use a reduced-size set of 1024 real examples for training which is gradually modified
to contain synthetic examples and a fixed set of 1024 real examples for testing. Error is expressed
relative to the case of no synthetic examples. We use Linear NCA for this experiment and train as
described above. We follow two different regimes. In NCAR-1 we do not reset the weights of the
model to random each time we adjust the training set to add more synthetic examples. We simply
add more synthetic data and continue learning. In NCAR-2 we reset the weights to the same random
seed for each run. The overall result is the same for each regime: the addition of synthetic examples
to the training set improves test performance on real data up to a level at which the number of
synthetic examples is double the number of real examples.
5 Conclusions
We have presented a nonparametric approach for pose estimation in realistic, challenging video
datasets. At the core of our method is a learned parametric mapping from high-dimensional space to
a low-dimensional space in which distance is efficiently computed. Our work differs from previous
attempts at learning invariant mappings in that it is optimized for nearest neighbour regression rather
than classification, and it scales to realistically sized images through the use of convolution and
weight-sharing. This permits us to learn domain-adaptive features directly from pixels rather than
relying on hand-crafted features or global descriptors.
In our experiments, we have restricted ourselves to 1-NN matching, but we plan to investigate other
more sophisticated approaches such as locally weighted regression, or using the match as an
initialization for a gradient descent search in a parametric model. Though we work with video, our model
does not rely on any type of temporal coherence. Integrating temporal knowledge in the form of a
prior would benefit our approach. Alternatively, temporal context could be integrated at the input
level, from simple frame differencing to more sophisticated temporal feature extraction (e.g. [23]).
Our entire network is trained end-to-end with a single objective, and we do not perform any network
pre-training as in [34, 38]. Recent work has demonstrated that pre-training can successfully
be applied to convolutional architectures, both in the context of RBMs [22, 27] and sparse
coding [19]. We intend to investigate the effect of pre-training, as well as the use of mixed generative
and discriminative objectives.
References
[1] A. Agarwal and B. Triggs. Recovering 3D human pose from monocular images. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 28(1):44-58, 2006.
[2] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In FOCS, pages
459-468, 2006.
[3] M. Andriluka, S. Roth, and B. Schiele. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009.
[4] V. Athitsos, J. Alon, S. Sclaroff, and G. Kollios. Boostmap: A method for efficient approximate similarity rankings. CVPR, 2004.
[5] S. Becker and G. Hinton. Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 355(6356):161-163,
1992.
[6] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3d human pose annotations. In ICCV, sep 2009.
[7] J. Bouvrie. Notes on convolutional neural networks. Unpublished, 2006.
[8] P. Buehler, A. Zisserman, and M. Everingham. Learning sign language by watching TV (using weakly aligned subtitles). CVPR, 2009.
[9] N. Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance. ECCV, 2006.
[10] A. Farhadi, D. Forsyth, and R. White. Transfer Learning in Sign language. In CVPR, 2007.
[11] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In CVPR, 2008.
[12] V. Ferrari, M. Marin-Jimenez, and A. Zisserman. Pose search: Retrieving people using their pose. In CVPR, 2009.
[13] A. Frome, G. Cheung, A. Abdulkader, M. Zennaro, B. Wu, A. Bissacco, H. Adam, H. Neven, and L. Vincent. Large-scale Privacy
Protection in Google Street View. In ICCV, 2009.
[14] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In NIPS, 2004.
[15] K. Grauman, G. Shakhnarovich, and T. Darrell. Inferring 3d structure with a statistical image-based shape model. In ICCV, pages
641-648, 2003.
[16] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, pages 1735-1742, 2006.
[17] G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[18] K. Jarrett, K. Kavukcuoglu, M-A Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In ICCV,
2009.
[19] K. Kavukcuoglu, M-A Ranzato, and Y. LeCun. Fast inference in sparse coding algorithms with applications to object recognition.
Technical report, NYU, 2008. CBLL-TR-2008-12-01.
[20] P. Keller, S. Mannor, and D. Precup. Automatic basis function construction for approximate dynamic programming and reinforcement
learning. In ICML, pages 449-456, 2006.
[21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278-
2324, 1998.
[22] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical
representations. In ICML, pages 609-616, 2009.
[23] R. Memisevic and G. Hinton. Unsupervised learning of image transformations. In CVPR, 2007.
[24] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In ICML, pages 737-744, 2009.
[25] G. Mori and J. Malik. Estimating human body configurations using shape context matching. ECCV, 2002.
[26] M. Nechyba, L. Brandy, and H. Schneiderman. Pittpatt face detection and tracking for the CLEAR 2007 evaluation. Multimodal
Technologies for Perception of Humans, 2008.
[27] M. Norouzi, M. Ranjbar, and G. Mori. Stacks of convolutional restricted boltzmann machines for shift-invariant feature learning. In
CVPR, 2009.
[28] S.J. Nowlan and J.C. Platt. A convolutional neural network hand tracker. In NIPS, 1995.
[29] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of
Computer Vision, 42(3):145-175, 2001.
[30] N. Pinto, D. Cox, and J. DiCarlo. Why is real-world visual object recognition hard? PLoS Comput Biol, 4(1), 2008.
[31] N. Pinto, D. Doukhan, J. DiCarlo, and D. D. Cox. A high-throughput screening approach to discovering good forms of biologically
inspired visual representation. PLoS Comput Biol, 5(11), 2009.
[32] R. Poppe. Vision-based human motion analysis: An overview. Computer Vision and Image Understanding, 108(1-2):4-18, 2007.
[33] D. Ramanan, D. Forsyth, and A. Zisserman. Strike a pose: Tracking people by finding stylized poses. In CVPR, 2005.
[34] R. Salakhutdinov and G. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In AISTATS, volume 11,
2007.
[35] B. Sapp, C. Jordan, and B. Taskar. Adaptive pose priors for pictorial structures. In CVPR, 2010.
[36] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing. In ICCV, pages 750-759, 2003.
[37] L. Sigal, A. Balan, and M. J. Black. HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation
of articulated human motion. IJCV, 87(1/2):4-27, 2010.
[38] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In CVPR, 2008.
[39] C. Wren, A. Azarbayejani, T. Darrell, and A. Pentland. Pfinder: Real-time tracking of the human body. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 19(7):780-785, 1997.
3,472 | 4,144 | Lower Bounds on Rate of Convergence of Cutting Plane Methods

Xinhua Zhang
Dept. of Computing Science
University of Alberta
[email protected]

Ankan Saha
Dept. of Computer Science
University of Chicago
[email protected]

S. V. N. Vishwanathan
Dept. of Statistics and Dept. of Computer Science
Purdue University
[email protected]

Abstract
In a recent paper Joachims [1] presented SVM-Perf, a cutting plane method (CPM) for training linear Support Vector Machines (SVMs) which converges to an ε accurate solution in O(1/ε²) iterations. By tightening the analysis, Teo et al. [2] showed that O(1/ε) iterations suffice. Given the impressive convergence speed of CPM on a number of practical problems, it was conjectured that these rates could be further improved. In this paper we disprove this conjecture. We present counter examples which are not only applicable for training linear SVMs with hinge loss, but also hold for support vector methods which optimize a multivariate performance score. However, surprisingly, these problems are not inherently hard. By exploiting the structure of the objective function we can devise an algorithm that converges in O(1/√ε) iterations.
1 Introduction
There has been an explosion of interest in machine learning over the past decade, much of which has been fueled by the phenomenal success of binary Support Vector Machines (SVMs). Driven by numerous applications, recently, there has been increasing interest in support vector learning with linear models. At the heart of SVMs is the following regularized risk minimization problem:

  min_w J(w) := (λ/2)‖w‖² + Remp(w)   with   Remp(w) := (1/n) Σ_{i=1}^n max(0, 1 − yᵢ⟨w, xᵢ⟩),    (1)

where the first term is the regularizer and the second the empirical risk. Here we assume access to a training set of n labeled examples {(xᵢ, yᵢ)}_{i=1}^n where xᵢ ∈ R^d and yᵢ ∈ {−1, +1}, and use the square Euclidean norm ‖w‖² = Σᵢ wᵢ² as the regularizer. The parameter λ controls the trade-off between the empirical risk and the regularizer.
There has been significant research devoted to developing specialized optimizers which minimize J(w) efficiently. In an award winning paper, Joachims [1] presented a cutting plane method (CPM)¹, SVM-Perf, which was shown to converge to an ε accurate solution of (1) in O(1/ε²) iterations, with each iteration requiring O(nd) effort. This was improved by Teo et al. [2] who showed that their Bundle Method for Regularized Risk Minimization (BMRM) (which encompasses SVM-Perf as a special case) converges to an ε accurate solution in O(nd/ε) time.
While online learning methods are becoming increasingly popular for solving (1), a key advantage of CPM such as SVM-Perf and BMRM is their ability to directly optimize nonlinear multivariate performance measures such as F1-score, ordinal regression loss, and ROCArea which are widely used in some application areas. In this case Remp does not decompose into a sum of losses over individual data points like in (1), and hence one has to employ batch algorithms. Letting Δ(y, ŷ) denote the multivariate discrepancy between the correct labels y := (y₁, ..., yₙ)ᵀ and a candidate labeling ŷ (to be concretized later), the Remp for the multivariate measure is formulated by [3] as

¹In this paper we use the term cutting plane methods to denote specialized solvers employed in machine learning. While clearly related, they must not be confused with cutting plane methods used in optimization.
"
Remp (w) =
max
? ?{?1,1}n
y
#
n
1X
?) +
?(y, y
hw, xi i (?
yi ? yi ) .
n i=1
(2)
In another award winning paper by Joachims [3], the regularized risk minimization problems corresponding to these measures are optimized by using a CPM.
Given the widespread use of CPM in machine learning, it is important to understand their convergence guarantees in terms of the upper and lower bounds on the number of iterations needed to converge to an ε accurate solution. The tightest, O(1/ε), upper bounds on the convergence speed of CPM is due to Teo et al. [2], who analyzed a restricted version of BMRM which only optimizes over one dual variable per iteration. However, on practical problems the observed rate of convergence is significantly faster than predicted by theory. Therefore, it had been conjectured that the upper bounds might be further tightened via a more refined analysis. In this paper we construct counter examples for both decomposable Remp like in equation (1) and non-decomposable Remp like in equation (2), on which CPM requires Ω(1/ε) iterations to converge, thus disproving this conjecture². We will work with BMRM as our prototypical CPM. As Teo et al. [2] point out, BMRM includes many other CPM such as SVM-Perf as special cases.
Our results lead to the following natural question: Do the lower bounds hold because regularized risk minimization problems are fundamentally hard, or is it an inherent limitation of CPM? In other words, to solve problems such as (1), does there exist a solver which requires less than O(nd/ε) effort (better in n, d and ε)? We provide partial answers. To understand our contribution one needs to understand the two standard assumptions that are made when proving convergence rates:
• A1: The data points xᵢ lie inside an L2 (Euclidean) ball of radius R, that is, ‖xᵢ‖ ≤ R.
• A2: The subgradient of Remp is bounded, i.e., at any point w, there exists a subgradient g of Remp such that ‖g‖ ≤ G < ∞.
Clearly assumption A1 is more restrictive than A2. By adapting a result due to [6] we show that one can devise an O(nd/√ε) algorithm for the case when assumption A1 holds. Finding a fast optimizer under assumption A2 remains an open problem.
Notation: Lower bold case letters (e.g., w, α) denote vectors, wᵢ denotes the i-th component of w, 0 refers to the vector with all zero components, eᵢ is the i-th coordinate vector (all 0's except 1 at the i-th coordinate) and Δₖ refers to the k dimensional simplex. Unless specified otherwise, ⟨·,·⟩ denotes the Euclidean dot product ⟨x, w⟩ = Σᵢ xᵢwᵢ, and ‖·‖ refers to the Euclidean norm ‖w‖ := ⟨w, w⟩^{1/2}. We denote R̄ := R ∪ {∞}, and [t] := {1, ..., t}.
Our paper is structured as follows. We briefly review BMRM in Section 2. Two types of lower bounds are subsequently defined in Section 3, and Section 4 contains descriptions of various counter examples that we construct. In Section 5 we describe an algorithm which provably converges to an ε accurate solution of (1) in O(1/√ε) iterations under assumption A1. The paper concludes with a discussion and outlook in Section 6. Technical proofs and a ready reckoner of the convex analysis concepts used in the paper can be found in [7, Appendix A].
2 BMRM
At every iteration, BMRM replaces Remp by a piecewise linear lower bound R_k^cp and optimizes [2]

  min_w J_k(w) := (λ/2)‖w‖² + R_k^cp(w),   where   R_k^cp(w) := max_{1≤i≤k} ⟨w, aᵢ⟩ + bᵢ,    (3)

to obtain the next iterate w_k. Here aᵢ ∈ ∂Remp(w_{i−1}) denotes an arbitrary subgradient of Remp at w_{i−1} and bᵢ = Remp(w_{i−1}) − ⟨w_{i−1}, aᵢ⟩. The piecewise linear lower bound is successively tightened until the gap

  ε_k := min_{0≤t≤k} J(w_t) − J_k(w_k)    (4)

falls below a predefined tolerance ε.
Since J_k in (3) is a convex objective function, one can compute its dual. Instead of minimizing J_k with respect to w one can equivalently maximize the dual [2] over the k dimensional simplex:

  D_k(α) = −(1/(2λ))‖A_k α‖² + ⟨b_k, α⟩,   where α ∈ Δ_k,    (5)

²Because of the specialized nature of these solvers, lower bounds for general convex optimizers such as those studied by Nesterov [4] and Nemirovski and Yudin [5] do not apply.
Algorithm 1: qp-bmrm: solving the inner loop of BMRM exactly via full QP.
Require: Previous subgradients {aᵢ}_{i=1}^k and intercepts {bᵢ}_{i=1}^k.
1: Set A_k := (a₁, ..., a_k), b_k := (b₁, ..., b_k)ᵀ.
2: α_k ← argmax_{α ∈ Δₖ} { −(1/(2λ))‖A_k α‖² + ⟨α, b_k⟩ }.
3: return w_k = −λ⁻¹ A_k α_k.

Algorithm 2: ls-bmrm: solving the inner loop of BMRM approximately via line search.
Require: Previous subgradients {aᵢ}_{i=1}^k and intercepts {bᵢ}_{i=1}^k.
1: Set A_k := (a₁, ..., a_k), b_k := (b₁, ..., b_k)ᵀ.
2: Set α(η) := (η α_{k−1}ᵀ, 1 − η)ᵀ.
3: η_k ← argmax_{η ∈ [0,1]} { −(1/(2λ))‖A_k α(η)‖² + ⟨α(η), b_k⟩ }.
4: α_k ← (η_k α_{k−1}ᵀ, 1 − η_k)ᵀ.
5: return w_k = −λ⁻¹ A_k α_k.

and set α_k = argmax_{α ∈ Δₖ} D_k(α). Note that A_k and b_k in (5) are defined in Algorithm 1. Since maximizing D_k(α) is a quadratic programming (QP) problem, we call this algorithm qp-bmrm. Pseudo-code can be found in Algorithm 1.
Note that at iteration k the dual D_k(α) is a QP with k variables. As the number of iterations increases the size of the QP also increases. In order to avoid the growing cost of the dual optimization at each iteration, [2] proposed using a one-dimensional line search to calculate an approximate maximizer α_k on the line segment {(η α_{k−1}ᵀ, (1 − η))ᵀ : η ∈ [0, 1]}, and we call this variant ls-bmrm. Pseudo-code can be found in Algorithm 2. We refer the reader to [2] for details.
Even though qp-bmrm solves a more expensive optimization problem D_k(α) per iteration, Teo et al. [2] could only show that both variants of BMRM converge at O(1/ε) rates:

Theorem 1 ([2])  Suppose assumption A2 holds. Then for any ε < 4G²/λ, both ls-bmrm and qp-bmrm converge to an ε accurate solution of (1) as measured by (4) after at most the following number of steps:

  log₂(λJ(0)/G²) + 8G²/(λε) − 1.

Generality of BMRM  Thanks to the formulation in (3) which only uses Remp, BMRM is applicable to a wide variety of Remp. For example, when used to train binary SVMs with Remp specified by (1), it yields exactly the SVM-Perf algorithm [1]. When applied to optimize the multivariate score, e.g. F1-score with Remp specified by (2), it immediately leads to the optimizer given by [3].
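To make the loop in (3)-(5) concrete, here is a minimal Python sketch of BMRM with the ls-bmrm update of Algorithm 2. The closed-form maximization of the one-dimensional quadratic D(η) is our derivation, and the remp_and_subgrad oracle is a hypothetical interface, not code from [2].

```python
import numpy as np

def ls_bmrm(remp_and_subgrad, dim, lam, iters):
    # remp_and_subgrad(w) -> (Remp(w), g) with g a subgradient of Remp at w.
    w = np.zeros(dim)
    A, b, alpha = [], [], None
    for k in range(iters):
        r, g = remp_and_subgrad(w)
        A.append(np.asarray(g, dtype=float))
        b.append(r - A[-1] @ w)                 # intercept b_k of the new plane
        Ak, bk = np.column_stack(A), np.array(b)
        if alpha is None:
            alpha = np.array([1.0])
        else:
            p = Ak[:, :-1] @ alpha              # A_{k-1} alpha_{k-1}
            q = Ak[:, -1]                       # newest subgradient a_k
            d = p - q
            # D(eta) = -||q + eta d||^2/(2 lam) + eta<alpha, b_{1:k-1}> + (1-eta) b_k
            quad = -(d @ d) / (2 * lam)         # <= 0, so D is concave in eta
            lin = -(q @ d) / lam + (bk[:-1] @ alpha - bk[-1])
            if quad == 0.0:
                eta = 1.0 if lin > 0 else 0.0
            else:
                eta = float(np.clip(-lin / (2 * quad), 0.0, 1.0))
            alpha = np.append(eta * alpha, 1.0 - eta)
        w = -(Ak @ alpha) / lam                 # primal iterate from dual variables
    return w
```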
3 Upper and Lower Bounds
Since most rates of convergence discussed in the machine learning community are upper bounds, it is important to rigorously define the meaning of a lower bound with respect to ε, and to study its relationship with the upper bounds. At this juncture it is also important to clarify an important technical point. Instead of minimizing the objective function J(w) defined in (1), if we minimize a scaled version cJ(w) this scales the approximation gap (4) by c. Assumptions such as A1 and A2 fix this degree of freedom by bounding the scale of the objective function.
Given a function f ∈ F and an optimization algorithm A, suppose {w_k} are the iterates produced by the algorithm A when minimizing f. Define T(ε; f, A) as the first step index k when w_k becomes an ε accurate solution³:

  T(ε; f, A) = min { k : f(w_k) − min_w f(w) ≤ ε }.    (6)

Upper and lower bounds are both properties for a pair of F and A. A function g(ε) is called an upper bound of (F, A) if for all functions f ∈ F and all ε > 0, it takes at most order g(ε) steps for A to reduce the gap to less than ε, i.e.,

  (UB)  ∀ ε > 0, ∀ f ∈ F, T(ε; f, A) ≤ g(ε).    (7)

On the other hand, lower bounds can be defined in two different ways depending on how the above two universal qualifiers are flipped to existential qualifiers.

³The initial point also matters, as in the best case we can just start from the optimal solution. Thus the quantity of interest is actually T(ε; f, A) := max_{w₀} min{k : f(w_k) − min_w f(w) ≤ ε, starting point being w₀}. However, without loss of generality we assume some pre-specified way of initialization.
Algorithms |        Assuming A1         |        Assuming A2
           |   UB      SLB      WLB     |   UB      SLB      WLB
ls-bmrm    | O(1/ε)   Ω(1/ε)   Ω(1/ε)   | O(1/ε)   Ω(1/ε)   Ω(1/ε)
qp-bmrm    | O(1/ε)   open     open     | O(1/ε)   open     Ω(1/ε)
Nesterov   | O(1/√ε)  Ω(1/√ε)  Ω(1/√ε)  | n/a      n/a      n/a

Table 1: Summary of the known upper bounds and our lower bounds. Note: A1 ⇒ A2, but not vice versa. SLB ⇒ WLB, but not vice versa. UB is tight if it matches WLB.
versa. SLB ? WLB, but not vice versa. UB is tight, if it matches WLB.
? Strong lower bounds (SLB) h() is called a SLB of (F, A) if there exists a function f? ? F,
such that for all > 0 it takes at least h() steps for A to find an accurate solution of f?:
(SLB)
? f? ? F, s.t. ? > 0, T (; f?, A) ? h().
(8)
? Weak lower bound (WLB) h() is called a WLB of (F, A) if for any > 0, there exists a
function f ? F depending on , such that it takes at least h() steps for A to find an accurate
solution of f :
(WLB)
? > 0, ? f ? F, s.t. T (; f , A) ? h().
(9)
Clearly, the existence of a SLB implies a WLB. However, it is usually much harder to establish SLB
than WLB. Fortunately, WLBs are sufficient to refute upper bounds or to establish their tightness.
The size of the function class F affects the upper and lower bounds in opposite ways. Suppose
F 0 ? F. Proving upper (resp. lower) bounds on (F 0 , A) is usually easier (resp. harder) than proving
upper (resp. lower) bounds for (F, A).
4 Constructing Lower Bounds
Letting the minimizer of J(w) be w★, we are interested in bounding the primal gap of the iterates w_k: J(w_k) − J(w★). Datasets will be constructed explicitly whose resulting objective J(w) will be shown to attain the lower bounds of the algorithms. The Remp for both the hinge loss in (1) and the F1-score in (2) will be covered, and our results are summarized in Table 1. Note that as assumption A1 implies A2 and SLB implies WLB, some entries of the table imply others.
4.1 Strong Lower Bounds for Solving Linear SVMs using ls-bmrm
We first prove the Ω(1/ε) lower bound for ls-bmrm on SVM problems under assumption A1. Consider a one dimensional training set with four examples: (x₁, y₁) = (−1, −1), (x₂, y₂) = (−1/2, −1), (x₃, y₃) = (1/2, 1), (x₄, y₄) = (1, 1). Setting λ = 1/16, the regularized risk (1) can be written as (using w instead of w as it is now a scalar):

  min_{w ∈ R} J(w) = (1/32)w² + (1/2)[1 − w/2]₊ + (1/2)[1 − w]₊.    (10)

The minimizer of J(w) is w★ = 2, which can be verified by the fact that 0 is in the subdifferential of J at w★: 0 ∈ ∂J(2) = { 2/16 − (1/2)(1/2)β : β ∈ [0, 1] }. So J(w★) = 1/8. Choosing w₀ = 0, we have
The proof relies on two lemmata. The first shows that the iterates generated by ls-bmrm on J(w)
satisfy the following recursive relations.
Lemma 3 For k ? 1, the following recursive relations hold true
w2k+1 = 2 +
?2k+1,1 =
8?2k?1,1 (w2k?1 ? 4?2k?1,1 )
> 2,
w2k?1 (w2k?1 + 4?2k?1,1 )
2
2
w2k?1
+ 16?2k?1,1
(w2k?1 + 4?2k?1,1 )
2 ?2k?1,1 , where
4
and
w2k = 2 ?
8?2k?1,1
? (1, 2). (11)
w2k?1
?2k+1,1 is the first coordinate of ?2k+1 .
(12)
The proof is lengthy and is available at [7, Appendix B]. These recursive relations allow us to derive
the convergence rate of ?2k?1,1 and wk (see proof in [7, Appendix C]):
Lemma 4 limk?? k?2k?1,1 = 14 . Combining with (11), we get limk?? k|2 ? wk | = 2.
Now that wk approaches 2 at the rate of O(1/k), it is finally straightforward to translate it into the
rate at which J(wk ) approaches J(w? ). See the proof of Theorem 2 in [7, Appendix D].
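As a quick numerical check of Theorem 2 (assuming the ls_bmrm sketch from Section 2 is in scope), one can run the method on the objective (10) and watch k·(J(w_k) − 1/8) approach 1/4; the exact constant observed can depend on how subgradients are chosen at the hinge kinks, so this is a sanity check rather than a proof.

```python
import numpy as np

# Four-point dataset of Section 4.1 (lambda = 1/16):
# J(w) = w^2/32 + 0.5*[1 - w/2]_+ + 0.5*[1 - w]_+, with J(w*) = J(2) = 1/8.
def remp_and_subgrad(w):
    v = float(w[0])
    r = 0.5 * max(0.0, 1 - v / 2) + 0.5 * max(0.0, 1 - v)
    g = (-0.25 if v < 2 else 0.0) + (-0.5 if v < 1 else 0.0)
    return r, np.array([g])

lam = 1.0 / 16
for k in (10, 100, 1000):
    w = ls_bmrm(remp_and_subgrad, 1, lam, k)       # sketch from Section 2
    J = lam / 2 * w[0] ** 2 + remp_and_subgrad(w)[0]
    print(k, k * (J - 0.125))                      # Theorem 2: tends to 1/4
```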
4.2 Weak Lower Bounds for Solving Linear SVMs using qp-bmrm
Theorem 1 gives an upper bound on the convergence rate of qp-bmrm, assuming that Remp satisfies the assumption A2. In this section we further demonstrate that this O(1/ε) rate is also a WLB (hence tight) even when the Remp is specialized to SVM objectives satisfying A2.
Given ε > 0, define n = ⌈1/ε⌉ and construct a dataset {(xᵢ, yᵢ)}_{i=1}^n as yᵢ = (−1)ⁱ and xᵢ = (−1)ⁱ(n e_{i+1} + √n e₁) ∈ R^{n+1}. Then, choosing λ = 1, the corresponding objective function (1) is

  J(w) = ‖w‖²/2 + Remp(w),   where   Remp(w) = (1/n) Σᵢ [1 − yᵢ⟨w, xᵢ⟩]₊ = (1/n) Σᵢ [1 − √n w₁ − n w_{i+1}]₊.    (13)

It is easy to see that the minimizer w★ = (1/2)(1/√n, 1/n, ..., 1/n)ᵀ and J(w★) = 1/(4n). In fact, simply check that yᵢ⟨w★, xᵢ⟩ = 1, so ∂J(w★) = { w★ − ((1/√n) Σᵢ βᵢ, β₁, ..., βₙ)ᵀ : βᵢ ∈ [0, 1] }, and setting all βᵢ = 1/(2n) yields the subgradient 0. Our key result is the following theorem.
Theorem 5  Let w₀ = (1/√n, 0, 0, ...)ᵀ. Suppose running qp-bmrm on the objective function (13) produces iterates w₁, ..., w_k, .... Then it takes qp-bmrm at least 2/(3ε) steps to find an ε accurate solution. Formally,

  min_{i∈[k]} J(wᵢ) − J(w★) = 1/(2k) + 1/(4n)  for all k ∈ [n],   hence   min_{i∈[k]} J(wᵢ) − J(w★) > ε  for all k < 2/(3ε).

Indeed, after taking n steps, wₙ will cut a subgradient a_{n+1} = 0 and b_{n+1} = 0, and then the minimizer of J_{n+1}(w) gives exactly w★.
Proof  Since Remp(w₀) = 0 and ∂Remp(w₀) = { −(1/n) Σ_{i=1}^n βᵢyᵢxᵢ : βᵢ ∈ [0, 1] }, we can choose

  a₁ = −(1/n) y₁x₁ = (−1/√n, −1, 0, ...)ᵀ,   b₁ = Remp(w₀) − ⟨a₁, w₀⟩ = 0 + 1/n = 1/n,   and
  w₁ = argmin_w { ‖w‖²/2 + ⟨a₁, w⟩ + b₁ } = (1/√n, 1, 0, ...)ᵀ.

In general, we claim that the k-th iterate w_k produced by qp-bmrm is given by

  w_k = (1/√n, 1/k, ..., 1/k, 0, ...)ᵀ   (k copies of 1/k).

We prove this claim by induction on k. Assume the claim holds true for steps 1, ..., k; then it is easy to check that Remp(w_k) = 0 and ∂Remp(w_k) = { −(1/n) Σ_{i=k+1}^n βᵢyᵢxᵢ : βᵢ ∈ [0, 1] }. Thus we can again choose

  a_{k+1} = −(1/n) y_{k+1}x_{k+1},   and   b_{k+1} = Remp(w_k) − ⟨a_{k+1}, w_k⟩ = 1/n,   so
  w_{k+1} = argmin_w { ‖w‖²/2 + max_{1≤i≤k+1} {⟨aᵢ, w⟩ + bᵢ} } = (1/√n, 1/(k+1), ..., 1/(k+1), 0, ...)ᵀ   (k+1 copies of 1/(k+1)),

which can be verified by checking that ∂J_{k+1}(w_{k+1}) = { w_{k+1} + Σ_{i∈[k+1]} αᵢaᵢ : α ∈ Δ_{k+1} } ∋ 0. All that remains is to observe that J(w_k) = 1/(2k) + 1/(2n) while J(w★) = 1/(4n), from which it follows that J(w_k) − J(w★) = 1/(2k) + 1/(4n) as claimed. □
As an aside, the subgradient of the Remp in (13) does have Euclidean norm √(2n) at w = 0. However, in the above run of qp-bmrm, ∂Remp(w₀), ..., ∂Remp(wₙ) always contains a subgradient with small norm. So if we restrict the feasible region to a bounded box containing all the iterates, then J(w) does satisfy the assumption A2 and the optimal solution does not change. This is essentially a local satisfaction of A2. In fact, having a bounded subgradient of Remp at all w_k is sufficient for qp-bmrm to converge at the rate in Theorem 1.
However when we assume A1, which is more restrictive than A2, it remains an open question to determine whether the O(1/ε) rates are optimal for qp-bmrm on SVM objectives. Also left open is the SLB for qp-bmrm on SVMs.
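Because the proof gives the qp-bmrm iterates in closed form, the claimed gap can be verified numerically without a QP solver. The following sketch builds the dataset of (13) and compares J(w_k) − J(w★) against 1/(2k) + 1/(4n); the array layout is our own.

```python
import numpy as np

eps = 0.01
n = int(np.ceil(1 / eps))
# x_i = (-1)^i (n e_{i+1} + sqrt(n) e_1), y_i = (-1)^i, lambda = 1
X = np.zeros((n, n + 1)); y = np.empty(n)
for i in range(1, n + 1):
    s = (-1) ** i
    X[i - 1, 0] = s * np.sqrt(n)
    X[i - 1, i] = s * n
    y[i - 1] = s

def J(w):
    margins = 1 - y * (X @ w)
    return 0.5 * w @ w + np.mean(np.maximum(0.0, margins))

Jstar = 1 / (4 * n)                # J at w* = 0.5*(1/sqrt(n), 1/n, ..., 1/n)
for k in (1, 10, 100):
    wk = np.zeros(n + 1)           # closed-form qp-bmrm iterate from the proof:
    wk[0] = 1 / np.sqrt(n)         # w_k = (1/sqrt(n), 1/k, ..., 1/k, 0, ...)
    wk[1:k + 1] = 1 / k
    print(k, J(wk) - Jstar, 1 / (2 * k) + 1 / (4 * n))   # the two should match
```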
4.3 Weak Lower Bounds for Optimizing F1-score using qp-bmrm
F1-score is defined by using the contingency table: F1(ŷ, y) := 2a/(2a + b + c).

             y = 1    y = −1
  ŷ = 1        a        b
  ŷ = −1       c        d
           Contingency table.

Given ε > 0, define n = ⌈1/ε⌉ + 1 and construct a dataset {(xᵢ, yᵢ)}_{i=1}^n as follows: xᵢ = −(n/(2√3)) e₁ − (n/2) e_{i+1} ∈ R^{n+1} with yᵢ = −1 for all i ∈ [n−1], and xₙ = (√3 n/2) e₁ + (n/2) e_{n+1} ∈ R^{n+1} with yₙ = +1. So there is only one positive training example. Then the corresponding objective function is

  J(w) = ‖w‖²/2 + max_ŷ [ 1 − F1(y, ŷ) + (1/n) Σ_{i=1}^n yᵢ⟨w, xᵢ⟩ (yᵢŷᵢ − 1) ].    (14)

Theorem 6  Let w₀ = (1/√3) e₁. Then qp-bmrm takes at least 1/(3ε) steps to find an ε accurate solution:

  J(w_k) − min_w J(w) ≥ (1/2)(1/k − 1/(n−1))  ∀ k ∈ [n−1],   hence   min_{i∈[k]} J(wᵢ) − min_w J(w) > ε  ∀ k < 1/(3ε).
Proof  A rigorous proof can be found in [7, Appendix E]; we provide a sketch here. The crux is to show

  w_k = (1/√3, 1/k, ..., 1/k, 0, ...)ᵀ   (k copies of 1/k)   ∀ k ∈ [n−1].    (15)

We prove (15) by induction. Assume it holds for steps 1, ..., k. Then at step k + 1 we have

  (1/n) yᵢ⟨w_k, xᵢ⟩ = 1/6 + 1/(2k)  if i ∈ [k];   1/6  if k+1 ≤ i ≤ n−1;   1/2  if i = n.

For convenience, define the term in the max in (14) as

  ξ_k(ŷ) := 1 − F1(y, ŷ) + (1/n) Σ_{i=1}^n yᵢ⟨w_k, xᵢ⟩ (yᵢŷᵢ − 1).    (16)

Then it is not hard to see that the following assignments of ŷ (among others) maximize ξ_k: a) the correct labeling; b) only misclassify the positive training example xₙ (i.e., ŷₙ = −1); c) only misclassify one negative training example in x_{k+1}, ..., x_{n−1} into positive. And ξ_k equals 0 at all these assignments. For a proof, consider two cases. If ŷ misclassifies the positive training example, then F1(y, ŷ) = 0 and by (16) we have

  ξ_k(ŷ) = 1 − 0 + (1/n) Σ_{i=1}^{n−1} yᵢ⟨w_k, xᵢ⟩(yᵢŷᵢ − 1) + (1/2)(−1 − 1) = ((k+3)/(6k)) Σ_{i=1}^k (yᵢŷᵢ − 1) + (1/6) Σ_{i=k+1}^{n−1} (yᵢŷᵢ − 1) ≤ 0.

Suppose ŷ correctly labels the positive example, but misclassifies t₁ examples in x₁, ..., x_k and t₂ examples in x_{k+1}, ..., x_{n−1} (into positive). Then F1(y, ŷ) = 2/(2 + t₁ + t₂), and

  ξ_k(ŷ) = 1 − 2/(2 + t₁ + t₂) + (1/6 + 1/(2k)) Σ_{i=1}^k (yᵢŷᵢ − 1) + (1/6) Σ_{i=k+1}^{n−1} (yᵢŷᵢ − 1)
         = (t₁ + t₂)/(2 + t₁ + t₂) − (1/3 + 1/k) t₁ − (1/3) t₂ ≤ (t − t²)/(3(2 + t)) ≤ 0   (t := t₁ + t₂).

So we can pick ŷ = (−1, ..., −1, +1, −1, ..., −1, +1)ᵀ (k copies of −1, then +1 in position k+1, then −1's, then +1 in position n) which only misclassifies x_{k+1}, and get

  a_{k+1} = −(2/n) y_{k+1} x_{k+1} = −(1/√3) e₁ − e_{k+2},   b_{k+1} = Remp(w_k) − ⟨a_{k+1}, w_k⟩ = 0 + 1/3 = 1/3,
  w_{k+1} = argmin_w { ‖w‖²/2 + max_{i∈[k+1]} {⟨aᵢ, w⟩ + bᵢ} } = (1/√3, 1/(k+1), ..., 1/(k+1), 0, ...)ᵀ   (k+1 copies of 1/(k+1)),

which can be verified by ∂J_{k+1}(w_{k+1}) = { w_{k+1} + Σ_{i=1}^{k+1} αᵢaᵢ : α ∈ Δ_{k+1} } ∋ 0 (just set all αᵢ = 1/(k+1)). So (15) holds for step k + 1. End of induction.
All that remains is to observe that J(w_k) = (1/2)(1/3 + 1/k) while min_w J(w) ≤ J(w_{n−1}) = (1/2)(1/3 + 1/(n−1)), from which it follows that J(w_k) − min_w J(w) ≥ (1/2)(1/k − 1/(n−1)) as claimed in Theorem 6.
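For small n the maximization over ŷ in (14) can be done by brute force, which confirms the case analysis above. The following sketch, using our reconstruction of the dataset constants, enumerates all 2ⁿ labelings and checks that max_ŷ ξ_k(ŷ) = 0 (up to floating point) at the claimed iterate.

```python
import numpy as np
from itertools import product

n, k = 8, 3                  # small n so all 2^n labelings can be enumerated
# x_i = -(n/(2*sqrt(3))) e_1 - (n/2) e_{i+1}, y_i = -1 for i < n;
# x_n = (sqrt(3)*n/2) e_1 + (n/2) e_{n+1}, y_n = +1
X = np.zeros((n, n + 1)); y = -np.ones(n); y[-1] = 1.0
for i in range(n - 1):
    X[i, 0] = -n / (2 * np.sqrt(3)); X[i, i + 1] = -n / 2
X[-1, 0] = np.sqrt(3) * n / 2; X[-1, n] = n / 2

wk = np.zeros(n + 1); wk[0] = 1 / np.sqrt(3); wk[1:k + 1] = 1 / k

def f1(yhat):
    a = np.sum((yhat == 1) & (y == 1)); b = np.sum((yhat == 1) & (y == -1))
    c = np.sum((yhat == -1) & (y == 1))
    return 2 * a / (2 * a + b + c) if a > 0 else 0.0

m = y * (X @ wk) / n         # (1/n) y_i <w_k, x_i> as in the proof
best = max(1 - f1(np.array(yh)) + np.sum(m * (y * np.array(yh) - 1))
           for yh in product([-1, 1], repeat=n))
print(best)                  # Remp(w_k) = max xi_k = 0 per the case analysis
```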
5 An O(nd/√ε) Algorithm for Training Binary Linear SVMs
The lower bounds we proved above show that CPM such as BMRM require Ω(1/ε) iterations to converge. We now show that this is an inherent limitation of CPM and not an artifact of the problem. To demonstrate this, we will show that one can devise an algorithm for problems (1) and (2) which will converge in O(1/√ε) iterations. The key difficulty stems from the non-smoothness of the objective function, which renders second and higher order algorithms such as L-BFGS inapplicable. However, thanks to [7, Theorem 7 in Appendix A], the Fenchel dual of (1) is a convex smooth function with a Lipschitz continuous gradient, which is easy to optimize.
To formalize the idea of using the Fenchel dual, we can abstract from the objectives (1) and (2) a composite form of objective functions used in machine learning with linear models:

  min_{w ∈ Q₁} J(w) = f(w) + g*(Aw),   where Q₁ is a closed convex set.    (17)

Here, f(w) is a strongly convex function corresponding to the regularizer, Aw stands for the output of a linear model, and g* encodes the empirical risk measuring the discrepancy between the correct labels and the output of the linear model. Let the domain of g be Q₂. It is well known that [e.g. 8, Theorem 3.3.5] under some mild constraint qualifications, the adjoint form of J(w):

  D(α) = −g(α) − f*(−Aᵀα),   α ∈ Q₂    (18)

satisfies J(w) ≥ D(α) and inf_{w∈Q₁} J(w) = sup_{α∈Q₂} D(α).
Example 1: binary SVMs with bias. Let A := −Y Xᵀ where Y := diag(y₁, ..., yₙ) and X := (x₁, ..., xₙ), f(w) = (λ/2)‖w‖², g*(u) = min_{b∈R} (1/n) Σ_{i=1}^n [1 + uᵢ − yᵢb]₊ which corresponds to g(α) = −Σᵢ αᵢ. Then the adjoint form turns out to be the well known SVM dual objective function:

  D(α) = Σᵢ αᵢ − (1/(2λ)) αᵀ Y Xᵀ X Y α,   α ∈ Q₂ = { α ∈ [0, n⁻¹]ⁿ : Σᵢ yᵢαᵢ = 0 }.    (19)

Example 2: multivariate scores. Denote A as a 2ⁿ-by-d matrix where the ŷ-th row is (1/n) Σ_{i=1}^n xᵢᵀ(ŷᵢ − yᵢ) for each ŷ ∈ {−1, +1}ⁿ, f(w) = (λ/2)‖w‖², g*(u) = max_ŷ (Δ(y, ŷ) + u_ŷ), which corresponds to g(α) = −n Σ_ŷ Δ(y, ŷ)α_ŷ; we recover the primal objective (2) for multivariate performance measure. Its adjoint form is

  D(α) = −(1/(2λ)) αᵀ A Aᵀ α + n Σ_ŷ Δ(y, ŷ) α_ŷ,   α ∈ Q₂ = { α ∈ [0, n⁻¹]^{2ⁿ} : Σ_ŷ α_ŷ = 1/n }.    (20)

In a series of papers [6, 9, 10], Nesterov developed optimal gradient based methods for minimizing the composite objectives with primal (17) and adjoint (18). A sequence of w_k and α_k is produced such that under assumption A1 the duality gap J(w_k) − D(α_k) is reduced to less than ε after at most k = O(1/√ε) steps. We refer the readers to [9, 11] for details.
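The specific excessive-gap machinery of [9] is more involved; as a flavor of the O(1/√ε) rate, here is a standard Nesterov-style accelerated projected-gradient sketch for a smooth objective over a convex set, demonstrated on a toy box-constrained least-squares problem (not the SVM dual itself).

```python
import numpy as np

def accel_proj_grad(grad, proj, x0, L, iters):
    """Accelerated projected gradient (FISTA-style) for min f(x) over a
    convex set, given grad(x), a Lipschitz constant L of grad, and a
    projection operator proj(x) onto the set."""
    x = y = proj(x0); t = 1.0
    for _ in range(iters):
        x_new = proj(y - grad(y) / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# toy demo: minimize 0.5*||A x - b||^2 over the box [0, 1]^d
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)); b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2          # spectral norm squared bounds A^T A
x = accel_proj_grad(lambda x: A.T @ (A @ x - b),
                    lambda x: np.clip(x, 0.0, 1.0),
                    np.zeros(5), L, 200)
```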
5.1 Efficient Projections in Training SV Models with Optimal Gradient Methods
However, applying Nesterov's algorithm is challenging, because it requires an efficient subroutine for computing projections onto the set of constraints Q₂. This projection can be either a Euclidean projection or a Bregman projection.
Example 1: binary SVMs with bias. In this case we need to compute the Euclidean projection to Q₂ defined by (19), which entails solving a Quadratic Programming problem with a diagonal Hessian, many box constraints, and a single equality constraint. We present an O(n) algorithm for this task in [11, Section 5.5.1]. Plugging this into the algorithm described in [9] and noting that all intermediate steps of the algorithm can be computed in O(nd) time directly yields an O(nd/√ε) algorithm. More detailed description of the algorithm is available in [11].
Example 2: multivariate scores. Since the dimension of Q₂ in (20) is exponentially large in n, Euclidean projection is intractable and we resort to Bregman projection. Given a differentiable convex function F on Q₂, a point α, and a direction g, we can define the Bregman projection as:

  V(α, g) := argmin_{α̃ ∈ Q₂} F(α̃) − ⟨∇F(α) − g, α̃⟩.

Scaling up α by a factor of n, we can choose F(α) as the negative entropy F(α) = Σᵢ αᵢ log αᵢ. Then the application of the algorithm in [9] will endow a distribution over all possible labelings:

  p(ŷ; w) ∝ exp( c Δ(ŷ, y) + Σᵢ aᵢ ⟨xᵢ, w⟩ ŷᵢ ),   where c and aᵢ are constant scalars.    (21)

The solver will request the expectation E_ŷ[Σᵢ aᵢxᵢŷᵢ], which in turn requires the marginal distributions p(ŷᵢ). This is not as straightforward as in graphical models because Δ(ŷ, y) may not decompose. Fortunately, for multivariate scores defined by contingency tables, it is possible to compute the marginals in O(n²) time by using dynamic programming, and this cost is similar to the algorithm proposed by [3]. The detail of the dynamic programming is given in [11, Section 5.4].
6 Outlook and Conclusion
CPM are widely employed in machine learning especially in the context of structured prediction [12]. While upper bounds on their rates of convergence were known, lower bounds were not studied before. In this paper we set out to fill this gap by exhibiting counter examples in binary classification on which CPM require Ω(1/ε) iterations. Our examples are substantially different from the one in [13] which requires an increasing number of classes. The Ω(1/ε) lower bound is a fundamental limitation of these algorithms and not an artifact of the problem. We show this by devising an O(1/√ε) algorithm borrowing techniques from [9]. However, this algorithm assumes that the dataset is contained in a ball of bounded radius (assumption A1, Section 1). Devising an O(1/√ε) algorithm under the less restrictive assumption A2 remains an open problem.
It is important to note that the linear time algorithm in [11, Section 5.5.1] is the key to obtaining an O(nd/√ε) computational complexity for binary SVMs with bias mentioned in Section 5.1. However, this method has been rediscovered independently by many authors (including us), with the earliest known reference to the best of our knowledge being [14] in 1990. Some recent work in optimization [15] has focused on improving the practical performance, while in machine learning [16] gave an expected linear time algorithm via randomized median finding.
Choosing an optimizer for a given machine learning task is a trade-off between a number of potentially conflicting requirements. CPM are one popular choice but there are others. If one is interested in classification accuracy alone, without requiring deterministic guarantees, then online to batch conversion techniques combined with stochastic subgradient descent are a good choice [17]. While the dependence on ε is still Ω(1/ε) or worse [18], one gets bounds independent of n. However, as we pointed out earlier, these algorithms are applicable only when the empirical risk decomposes over the examples.
On the other hand, one can employ coordinate descent in the dual as is done in the Sequential Minimal Optimization (SMO) algorithm of [19]. However, as [20] show, if the kernel matrix obtained by stacking xᵢ into a matrix X and XᵀX is not strictly positive definite, then SMO requires O(n/ε) iterations with each iteration costing O(nd) effort. However, when the kernel matrix is strictly positive definite, then one can obtain an O(n² log(1/ε)) bound on the number of iterations, which has better dependence on ε, but is prohibitively expensive for large n. Even better dependence on ε can be achieved by using interior point methods [21] which require only O(log(log(1/ε))) iterations, but the time complexity per iteration is O(min{n²d, d²n}).
References
[1] T. Joachims. Training linear SVMs in linear time. In Proc. ACM Conf. Knowledge Discovery
and Data Mining (KDD), pages 217-226, 2006.
[2] C. H. Teo, S. V. N. Vishwanathan, A. J. Smola, and Q. V. Le. Bundle methods for regularized
risk minimization. J. Mach. Learn. Res., 11:311-365, January 2010.
[3] T. Joachims. A support vector method for multivariate performance measures. In Proc. Intl.
Conf. Machine Learning, pages 377-384, 2005.
[4] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2003.
[5] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization.
John Wiley and Sons, 1983.
[6] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Soviet Math. Docl., 269:543-547, 1983.
[7] Xinhua Zhang, Ankan Saha, and S. V. N. Vishwanathan. Lower bounds on rate
of convergence of cutting plane methods (long version). Technical report, 2010.
http://www.stat.purdue.edu/~vishy/papers/ZhaSahVis10_long.pdf.
[8] J. M. Borwein and A. S. Lewis. Convex Analysis and Nonlinear Optimization: Theory and
Examples. CMS Books in Mathematics. Canadian Mathematical Society, 2000.
[9] Y. Nesterov. Excessive gap technique in nonsmooth convex minimization. SIAM Journal on
Optimization, 16(1):235-249, 2005. ISSN 1052-6234.
[10] Y. Nesterov. Gradient methods for minimizing composite objective function. Technical Report 76, CORE Discussion Paper, UCL, 2007.
[11] Xinhua Zhang, Ankan Saha, and S. V. N. Vishwanathan. Regularized risk minimization by Nesterov's accelerated gradient methods: Algorithmic extensions and empirical studies. Technical
report arXiv:1011.0472, 2010. http://arxiv.org/abs/1011.0472.
[12] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured
and interdependent output variables. J. Mach. Learn. Res., 6:1453-1484, 2005.
[13] T. Joachims, T. Finley, and C.-N. J. Yu. Cutting-plane training of structural SVMs. Machine
Learning Journal, 77(1):27-59, 2009.
[14] P. M. Pardalos and N. Kovoor. An algorithm for a singly constrained class of quadratic programs
subject to upper and lower bounds. Mathematical Programming, 46:321-328, 1990.
[15] Y.-H. Dai and R. Fletcher. New algorithms for singly linearly constrained quadratic programs
subject to lower and upper bounds. Mathematical Programming: Series A and B, 106(3):403-421, 2006.
[16] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball
for learning in high dimensions. In Proc. Intl. Conf. Machine Learning, pages 272-279, 2008.
[17] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver
for SVM. In Proc. Intl. Conf. Machine Learning, pages 807-814, 2007.
[18] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. Wainwright. Information-theoretic lower
bounds on the oracle complexity of convex optimization. In Neural Information Processing
Systems, pages 1-9, 2009.
[19] J. C. Platt. Sequential minimal optimization: A fast algorithm for training support vector
machines. Technical Report MSR-TR-98-14, Microsoft Research, 1998.
[20] N. List and H. U. Simon. SVM-optimization and steepest-descent line search. In S. Dasgupta
and A. Klivans, editors, Proc. Annual Conf. Computational Learning Theory, 2009.
[21] M. C. Ferris and T. S. Munson. Interior-point methods for massive support vector machines.
SIAM Journal on Optimization, 13(3):783-804, 2002.
3,473 | 4,145 | Predicting Execution Time of Computer Programs
Using Sparse Polynomial Regression
Ling Huang
Intel Labs Berkeley
[email protected]
Byung-Gon Chun
Intel Labs Berkeley
[email protected]
Jinzhu Jia
UC Berkeley
[email protected]
Petros Maniatis
Intel Labs Berkeley
[email protected]
Bin Yu
UC Berkeley
[email protected]
Mayur Naik
Intel Labs Berkeley
[email protected]
Abstract
Predicting the execution time of computer programs is an important but challenging problem in the community of computer systems. Existing methods require experts to perform detailed analysis of program code in order to construct predictors
or select important features. We recently developed a new system to automatically
extract a large number of features from program execution on sample inputs, on
which prediction models can be constructed without expert knowledge. In this
paper we study the construction of predictive models for this problem. We propose the SPORE (Sparse POlynomial REgression) methodology to build accurate
prediction models of program performance using feature data collected from program execution on sample inputs. Our two SPORE algorithms are able to build
relationships between responses (e.g., the execution time of a computer program)
and features, and select a few from hundreds of the retrieved features to construct an explicitly sparse and non-linear model to predict the response variable.
The compact and explicitly polynomial form of the estimated model could reveal
important insights into the computer program (e.g., features and their non-linear
combinations that dominate the execution time), enabling a better understanding
of the program's behavior. Our evaluation on three widely used computer programs shows that SPORE methods can give accurate prediction with relative error
less than 7% by using a moderate number of training data samples. In addition, we
compare SPORE algorithms to state-of-the-art sparse regression algorithms, and
show that SPORE methods, motivated by real applications, outperform the other
methods in terms of both interpretability and prediction accuracy.
1 Introduction
Computing systems today are ubiquitous, and range from the very small (e.g., iPods, cellphones,
laptops) to the very large (servers, data centers, computational grids). At the heart of such systems
are management components that decide how to schedule the execution of different programs over
time (e.g., to ensure high system utilization or efficient energy use [11, 15]), how to allocate to each
program resources such as memory, storage and networking (e.g., to ensure a long battery life or fair
resource allocation), and how to weather anomalies (e.g., flash crowds or attacks [6, 17, 24]).
These management components typically must make guesses about how a program will perform
under given hypothetical inputs, so as to decide how best to plan for the future. For example,
consider a simple scenario in a data center with two computers, fast computer A and slow computer
B, and a program waiting to run on a large file f stored in computer B. A scheduler is often faced
with the decision of whether to run the program at B, potentially taking longer to execute, but
avoiding any transmission costs for the file; or moving the file from B to A but potentially executing
the program at A much faster. If the scheduler can predict accurately how long the program would
take to execute on input f at computer A or B, he/she can make an optimal decision, returning
results faster, possibly minimizing energy use, etc.
Despite all these opportunities and demands, uses of prediction have been at best unsophisticated
in modern computer systems. Existing approaches either create analytical models for the programs
based on simplistic assumptions [12], or treat the program as a black box and create a mapping function between certain properties of input data (e.g., file size) and output response [13]. The success
of such methods is highly dependent on human experts who are able to select important predictors
before a statistical modeling step can take place. Unfortunately, in practice experts may be hard to
come by, because programs can get complex quickly beyond the capabilities of a single expert, or
because they may be short-lived (e.g., applications from the iPhone app store) and unworthy of the
attention of a highly paid expert. Even when an expert is available, program performance is often
dependent not on externally visible features such as command-line parameters and input files, but
on the internal semantics of the program (e.g., what lines of code are executed).
To address this problem (lack of expert and inherent semantics), we recently developed a new system [7] to automatically extract a large number of features from the intermediate execution steps of
a program (e.g., internal variables, loops, and branches) on sample inputs; then prediction models
can be built from those features without the need for a human expert.
In this paper, we propose two Sparse POlynomial REgression (SPORE) algorithms that use the
automatically extracted features to predict a computer program's performance. They are variants of
each other in the way they build the nonlinear terms into the model: SPORE-LASSO first selects
a small number of features and then entertains a full nonlinear polynomial expansion of order less
than a given degree; while SPORE-FoBa chooses adaptively a subset of the full expanded terms
and hence allows possibly a higher order of polynomials. Our algorithms are in fact new general
methods motivated by the computer performance prediction problem. They can learn a relationship
between a response (e.g., the execution time of a computer program given an input) and the generated
features, and select a few from hundreds of features to construct an explicit polynomial form to
predict the response. The compact and explicit polynomial form reveals important insights in the
program semantics (e.g., the internal program loop that affects program execution time the most).
Our approach is general, flexible and automated, and can adapt the prediction models to specific
programs, computer platforms, and even inputs.
We evaluate our algorithms experimentally on three popular computer programs from web search
and image processing. We show that our SPORE algorithms can achieve accurate predictions with
relative error less than 7% by using a small amount of training data for our application, and that our
algorithms outperform existing state-of-the-art sparse regression algorithms in the literature in terms
of interpretability and accuracy.
Related Work. In prior attempts to predict program execution time, Gupta et al. [13] use a variant of
decision trees to predict execution time ranges for database queries. Ganapathi et al. [11] use KCCA
to predict time and resource consumption for database queries using statistics on query texts and
execution plans. To measure the empirical computational complexity of a program, Trendprof [12]
constructs linear or power-law models that predict program execution counts. The drawbacks of such
approaches include their need for expert knowledge about the program to identify good features, or
their requirement for simple input-size to execution time correlations.
Seshia and Rakhlin [22, 23] propose a game-theoretic estimator of quantitative program properties,
such as worst-case execution time, for embedded systems. These properties depend heavily on the
target hardware environment in which the program is executed. Modeling the environment manually
is tedious and error-prone. As a result, they formulate the problem as a game between their algorithm
(player) and the program's environment (adversary), where the player seeks to accurately predict the
property of interest while the adversary sets environment states and parameters.
Since expert resource is limited and costly, it is desirable to automatically extract features from program codes. Then machine learning techniques can be used to select the most important features
to build a model. In statistical machine learning, feature selection methods under linear regression models such as LASSO have been widely studied in the past decade. Feature selection with
non-linear models has been studied much less, but has recently been attracting attention. The most
notable are the SpAM work with theoretical and simulation results [20] and additive and generalized forward regression [18]. Empirical studies with data of these non-linear sparse methods are
very few [21]. The drawback of applying the SpAM method in our execution time prediction problem is that SpAM outputs an additive model and cannot use the interaction information between
features. But it is well-known that features of computer programs interact to determine the execution time [12]. One non-parametric modification of SpAM to replace the additive model has been
proposed [18]. However, the resulting non-parametric models are not easy to interpret and hence are
not desirable for our execution time prediction problem. Instead, we propose the SPORE methodology and develop efficient algorithms to train a SPORE model. Our work provides a promising
example of interpretable non-linear sparse regression models in solving real data problems.
2 Overview of Our System
Our focus in this paper is on algorithms for feature selection and model building. However we first
review the problem within which we apply these techniques to provide context [7]. Our goal is to
predict how a given program will perform (e.g., its execution time) on a particular input (e.g., input
files and command-line parameters). The system consists of four steps.
First, the feature instrumentation step analyzes the source code and automatically instruments it
to extract values of program features such as loop counts (how many times a particular loop has
executed), branch counts (how many times each branch of a conditional has executed), and variable
values (the k first values assigned to a numerical variable, for some small k such as 5).
Second, the profiling step executes the instrumented program with sample input data to collect values
for all created program features and the program?s execution times. The time impact of the data
collection is minimal.
Third, the slicing step analyzes each automatically identified feature to determine the smallest subset
of the actual program that can compute the value of that feature, i.e., the feature slice. This is the
cost of obtaining the value of the feature; if the whole program must execute to compute the value,
then the feature is expensive and not useful, since we can just measure execution time and we have
no need for prediction, whereas if only a little of the program must execute, the feature is cheap and
therefore possibly valuable in a predictive model.
Finally, the modeling step uses the feature values collected during profiling along with the feature
costs computed during slicing to build a predictive model on a small subset of generated features.
To obtain a model consisting of low-cost features, we iterate over the modeling and slicing steps,
evaluating the cost of selected features and rejecting expensive ones, until only low-cost features are
selected to construct the prediction model. At runtime, given a new input, the selected features are
computed using the corresponding slices, and the model is used to predict execution time from the
feature values.
The above description is minimal by necessity due to space constraints, and omits details on the
rationale, such as why we chose the kinds of features we chose or how program slicing works.
Though important, those details have no bearing on the results shown in this paper.
At present our system targets a fixed, overprovisioned computation environment without CPU job
contention or network bandwidth fluctuations. We therefore assume that execution times observed
during training will be consistent with system behavior on-line. Our approach can adapt to modest
change in execution environment by retraining on different environments. In our future research, we
plan to incorporate candidate features of both hardware (e.g., configurations of CPU, memory, etc)
and software environment (e.g., OS, cache policy, etc) for predictive model construction.
3 Sparse Polynomial Regression Model
Our basic premise for predictive program analysis is that a small but relevant set of features may explain the execution time well. In other words, we seek a compact model, an explicit-form function
of a small number of features, that accurately estimates the execution time of the program.
To make the problem tractable, we constrain our models to the multivariate polynomial family, for at
least three reasons. First, a "good program" is usually expected to have polynomial execution time in
some (combination of) features. Second, a polynomial model up to certain degree can approximate
well many nonlinear models (due to Taylor expansion). Finally, a compact polynomial model can
provide an easy-to-understand explanation of what determines the execution time of a program,
providing program developers with intuitive feedback and a solid basis for analysis.
For each computer program, our feature instrumentation procedure outputs a data set with $n$ samples as tuples $\{y_i, x_i\}_{i=1}^{n}$, where $y_i \in \mathbb{R}$ denotes the $i$th observation of execution time, and $x_i$ denotes the $i$th observation of the vector of $p$ features. We now review some obvious alternative methods to modeling the relationship between $Y = [y_i]$ and $X = [x_i]$, point out their drawbacks, and then we
proceed to our SPORE methodology.
3.1 Sparse Regression and Alternatives
Least square regression is widely used for finding the best-fitting $f(x, \beta)$ to a given set of responses
yi by minimizing the sum of the squares of the residuals [14]. Regression with subset selection
finds, for each $k \in \{1, 2, \ldots, m\}$, the feature subset of size $k$ that gives the smallest residual sum of
squares. However, it is a combinatorial optimization and is known to be NP-hard [14]. In recent
years a number of efficient alternatives based on model regularization have been proposed. Among
them, LASSO [25] finds the selected features with coefficients $\hat{\beta}$ given a tuning parameter $\lambda$ as follows:
$$\hat{\beta} = \arg\min_{\beta} \frac{1}{2}\|Y - X\beta\|_2^2 + \lambda \sum_j |\beta_j|. \qquad (1)$$
LASSO effectively enforces many $\beta_j$'s to be 0, and selects a small subset of features (indexed by non-zero $\beta_j$'s) to build the model, which is usually sparse and has better prediction accuracy than models created by ordinary least square regression [14] when $p$ is large. Parameter $\lambda$ controls the complexity of the model: as $\lambda$ grows larger, fewer features are selected.
Being a convex optimization problem is an important advantage of the LASSO method since several
fast algorithms exist to solve the problem efficiently even with large-scale data sets [9, 10, 16, 19].
Furthermore, LASSO has convenient theoretical and empirical properties. Under suitable assumptions, it can recover the true underlying model [8, 25]. Unfortunately, when predictors are highly
correlated, LASSO usually cannot select the true underlying model. The adaptive-LASSO [29]
defined below in Equation (2) can overcome this problem:
$$\hat{\beta} = \arg\min_{\beta} \frac{1}{2}\|Y - X\beta\|_2^2 + \lambda \sum_j \left|\frac{\beta_j}{w_j}\right|, \qquad (2)$$
where $w_j$ can be any consistent estimate of $\beta_j$. Here we choose $w$ to be a ridge estimate of $\beta$:
$$w = (X^T X + 0.001 I)^{-1} X^T Y,$$
where $I$ is the identity matrix.
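The reweighting in Equation (2) is easy to compute with off-the-shelf solvers: substituting $\gamma_j = \beta_j / w_j$ turns the weighted penalty into a plain L1 penalty on a column-rescaled design matrix. Below is a minimal sketch assuming numpy and scikit-learn; it illustrates the estimator and is not the authors' implementation (note that scikit-learn's Lasso scales the squared loss by $1/(2n)$, so its alpha corresponds to $\lambda/n$ here).

import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(X, Y, lam):
    # Ridge estimate used as the weight vector w in Equation (2).
    n, p = X.shape
    w = np.linalg.solve(X.T @ X + 0.001 * np.eye(p), X.T @ Y)
    # Plain LASSO on columns scaled by w solves the weighted problem.
    fit = Lasso(alpha=lam / n, fit_intercept=False).fit(X * w, Y)
    return fit.coef_ * w  # undo the column scaling to recover beta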
Technically LASSO can be easily extended to create nonlinear models (e.g., using polynomial basis functions up to degree $d$ of all $p$ features). However, this approach gives us $\binom{p+d}{d}$ terms, which is very large when $p$ is large (on the order of thousands) even for small $d$, making regression computationally expensive. We give two alternatives to fit the sparse polynomial regression model next.
3.2 SPORE Methodology and Two Algorithms
Our methodology captures non-linear effects of features, as well as non-linear interactions among features, by using polynomial basis functions over those features (we use terms to denote the polynomial basis functions subsequently). We expand the feature set $x = \{x_1, x_2, \ldots, x_k\}$, $k \le p$, to all the terms in the expansion of the degree-$d$ polynomial $(1 + x_1 + \ldots + x_k)^d$, and use the terms to construct a multivariate polynomial function $f(x, \beta)$ for the regression. We define $\mathrm{expan}(X, d)$ as the mapping from the original data matrix $X$ to a new matrix with the polynomial expansion terms up to degree $d$ as the columns. For example, using a degree-2 polynomial with feature set $x = \{x_1, x_2\}$, we expand out $(1 + x_1 + x_2)^2$ to get terms $1, x_1, x_2, x_1^2, x_1 x_2, x_2^2$, and use them as basis functions to construct the following function for regression:
$$\mathrm{expan}([x_1, x_2], 2) = [1, [x_1], [x_2], [x_1^2], [x_1 x_2], [x_2^2]],$$
$$f(x, \beta) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1^2 + \beta_4 x_1 x_2 + \beta_5 x_2^2.$$
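For concreteness, scikit-learn's PolynomialFeatures enumerates exactly these terms; the snippet below is an illustrative aside (not part of the original system) reproducing $\mathrm{expan}([x_1, x_2], 2)$:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2.0, 3.0]])               # one sample with x1 = 2, x2 = 3
expander = PolynomialFeatures(degree=2)  # includes the constant term 1
print(expander.fit_transform(X))          # [[1. 2. 3. 4. 6. 9.]]
print(expander.get_feature_names_out())   # ['1' 'x0' 'x1' 'x0^2' 'x0 x1' 'x1^2']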
Complete expansion on all p features is not necessary, because many of them have little contribution to the execution time. Motivated by this execution time application, we propose a general
methodology called SPORE which is a sparse polynomial regression technique. Next, we develop
two algorithms to fit our SPORE methodology.
3.2.1 SPORE-LASSO: A Two-Step Method
For a sparse polynomial model with only a few features, if we can preselect a small number of
features, applying the LASSO on the polynomial expansion of those preselected features will still
be efficient, because we do not have too many polynomial terms. Here is the idea:
Step 1: Use the linear LASSO algorithm to select a small number of features and filter out (often
many) features that hardly have contributions to the execution time.
Step 2: Use the adaptive-LASSO method on the expanded polynomial terms of the selected features
(from Step 1) to construct the sparse polynomial model.
Adaptive-LASSO is used in Step 2 because of the collinearity of the expanded polynomial features.
Step 2 can be computed efficiently if we only choose a small number of features in Step 1. We
present the resulting SPORE-LASSO algorithm in Algorithm 1 below.
Algorithm 1 SPORE-LASSO
Input: response $Y$, feature data $X$, maximum degree $d$, $\lambda_1$, $\lambda_2$
Output: feature index $S$, term index $S_t$, weights $\hat{\beta}$ for the $d$-degree polynomial basis.
1: $\hat{\alpha} = \arg\min_{\alpha} \frac{1}{2}\|Y - X\alpha\|_2^2 + \lambda_1 \|\alpha\|_1$
2: $S = \{j : \hat{\alpha}_j \neq 0\}$
3: $X_{\mathrm{new}} = \mathrm{expan}(X(S), d)$
4: $w = (X_{\mathrm{new}}^T X_{\mathrm{new}} + 0.001 I)^{-1} X_{\mathrm{new}}^T Y$
5: $\hat{\beta} = \arg\min_{\beta} \frac{1}{2}\|Y - X_{\mathrm{new}}\beta\|_2^2 + \lambda_2 \sum_j \left|\frac{\beta_j}{w_j}\right|$
6: $S_t = \{j : \hat{\beta}_j \neq 0\}$
$X(S)$ in Step 3 of Algorithm 1 is a sub-matrix of $X$ containing only columns from $X$ indexed by $S$. For a new observation with feature vector $X = [x_1, x_2, \ldots, x_p]$, we first get the selected feature vector $X(S)$, then obtain the polynomial terms $X_{\mathrm{new}} = \mathrm{expan}(X(S), d)$, and finally we compute the prediction $\hat{Y} = X_{\mathrm{new}}\hat{\beta}$. Note that the prediction depends on the choice of $\lambda_1$, $\lambda_2$ and the maximum degree $d$. In this paper, we fix $d = 3$. $\lambda_1$ and $\lambda_2$ are chosen by minimizing the Akaike Information Criterion (AIC) on the LASSO solution paths. The AIC is defined as $n \log(\|Y - \hat{Y}\|_2^2) + 2s$, where $\hat{Y}$ is the fitted $Y$ and $s$ is the number of polynomial terms selected in the model. To be precise, for the linear LASSO step (Step 1 of Algorithm 1), a whole solution path with a number of $\lambda_1$ values can be obtained using the algorithm in [10]. On the solution path, for each fixed $\lambda_1$, we compute a solution path with varied $\lambda_2$ for Step 5 of Algorithm 1 to select the polynomial terms. For each $\lambda_2$, we calculate the AIC, and choose the $(\lambda_1, \lambda_2)$ with the smallest AIC.
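A compact sketch of the full two-step procedure, assuming the adaptive_lasso helper sketched earlier; lam1 and lam2 stand in for the AIC-selected $(\lambda_1, \lambda_2)$, and the code is illustrative rather than the authors' implementation.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

def spore_lasso(X, Y, d=3, lam1=0.1, lam2=0.1):
    n = X.shape[0]
    # Step 1: linear LASSO pre-selects a small feature subset S.
    alpha_hat = Lasso(alpha=lam1 / n, fit_intercept=False).fit(X, Y).coef_
    S = np.flatnonzero(alpha_hat)
    # Step 3: expand the selected columns to polynomial terms of degree <= d.
    expander = PolynomialFeatures(degree=d)
    X_new = expander.fit_transform(X[:, S])
    # Steps 4-5: adaptive LASSO over the expanded terms.
    beta_hat = adaptive_lasso(X_new, Y, lam2)
    return S, expander, beta_hat

def spore_predict(model, X):
    S, expander, beta_hat = model
    return expander.transform(X[:, S]) @ beta_hat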
One may wonder whether Step 1 incorrectly discards features required for building a good model in Step 2. We next show theoretically this is not the case. Let $S$ be a subset of $\{1, 2, \ldots, p\}$ with complement $S^c = \{1, 2, \ldots, p\} \setminus S$. Write the feature matrix $X$ as $X = [X(S), X(S^c)]$. Let the response $Y = f(X(S)) + \epsilon$, where $f(\cdot)$ is any function and $\epsilon$ is additive noise. Let $n$ be the number of observations and $s$ the size of $S$. We assume that $X$ is deterministic, $p$ and $s$ are fixed, and the $\epsilon_i$'s are i.i.d. and follow the Gaussian distribution with mean 0 and variance $\sigma^2$. Our results also hold for zero-mean sub-Gaussian noise with parameter $\sigma^2$. More general results regarding general scaling of $n$, $p$ and $s$ can also be obtained.
Under the following conditions, we show that Step 1 of SPORE-LASSO, the linear LASSO, selects the relevant features even if the response $Y$ depends on predictors $X(S)$ nonlinearly:
1. The columns ($X_j$, $j = 1, \ldots, p$) of $X$ are standardized: $\frac{1}{n} X_j^T X_j = 1$ for all $j$;
2. $\Lambda_{\min}\!\left(\frac{1}{n} X(S)^T X(S)\right) \ge c$ with a constant $c > 0$;
3. $\min \left|(X(S)^T X(S))^{-1} X(S)^T f(X(S))\right| > \alpha$ with a constant $\alpha > 0$;
4. $\frac{\left\|X_{S^c}^T \left[I - X_S (X_S^T X_S)^{-1} X_S^T\right] f(X_S)\right\|_\infty}{n} < \frac{\eta \alpha c}{2\sqrt{s+1}}$, for some $0 < \eta < 1$;
5. $\left\|X_{S^c}^T X_S (X_S^T X_S)^{-1}\right\|_\infty \le 1 - \eta$;
where $\Lambda_{\min}(\cdot)$ denotes the minimum eigenvalue of a matrix, $\|A\|_\infty$ is defined as $\max_i \left[\sum_j |A_{ij}|\right]$, and the inequalities are defined element-wise.
Theorem 3.1. Under the conditions above, with probability $\to 1$ as $n \to \infty$, there exists some $\lambda$, such that $\hat{\beta} = (\hat{\beta}_S, \hat{\beta}_{S^c})$ is the unique solution of the LASSO (Equation (1)), where $\hat{\beta}_j \neq 0$ for all $j \in S$ and $\hat{\beta}_{S^c} = 0$.
Remark. The first two conditions are trivial: Condition 1 can be obtained by rescaling, while Condition 2 assumes that the design matrix composed of the true predictors in the model is not singular. Condition 3 is a reasonable condition which means that the linear projection of the expected response onto the space spanned by the true predictors is not degenerate. Condition 4 is a little bit tricky; it says that the irrelevant predictors ($X_{S^c}$) are not very correlated with the "residuals" of $E(Y)$ after its projection onto $X_S$. Condition 5 is always needed when considering LASSO's model selection consistency [26, 28]. The proof of the theorem is included in the supplementary material.
3.2.2 Adaptive Forward-Backward: SPORE-FoBa
Using all of the polynomial expansions of a feature subset is not flexible. In this section, we propose the SPORE-FoBa algorithm, a more flexible algorithm using adaptive forward-backward searching over the polynomially expanded data: during search step $k$ with an active set $T^{(k)}$, we examine one new feature $X_j$, and consider a small candidate set which consists of the candidate feature $X_j$, its higher-order terms, and the (non-linear) interactions between previously selected features (indexed by $S$) and candidate feature $X_j$ with total degree up to $d$, i.e., terms of the form
$$X_j^{d_1} \prod_{l \in S} X_l^{d_l}, \quad \text{with } d_1 > 0,\ d_l \ge 0,\ \text{and } d_1 + \sum_{l \in S} d_l \le d. \qquad (3)$$
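The candidate set of Equation (3) is small and easy to enumerate. The sketch below is illustrative (the exponent-map representation of terms is ours, not the paper's):

from itertools import product

def candidate_terms(j, S, d):
    # All terms X_j^{d1} * prod_{l in S} X_l^{dl} with d1 > 0 and total degree <= d,
    # each represented as a map from feature index to exponent.
    terms = []
    for d1 in range(1, d + 1):
        budget = d - d1
        for exps in product(range(budget + 1), repeat=len(S)):
            if sum(exps) <= budget:
                term = {j: d1}
                term.update({l: e for l, e in zip(S, exps) if e > 0})
                terms.append(term)
    return terms

# With j = 2, S = [0], d = 3, this yields {2:1}, {2:1, 0:1}, {2:1, 0:2},
# {2:2}, {2:2, 0:1}, and {2:3}.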
Algorithm 2 below is a short description of SPORE-FoBa, which uses linear FoBa [27] at Steps 5 and 6. The main idea of SPORE-FoBa is that a term from the candidate set is added into the model if and only if adding this term makes the residual sum of squares (RSS) decrease a lot. We scan all of the terms in the candidate set and choose the one which makes the RSS drop most. If the drop in the RSS is greater than a pre-specified value $\epsilon$, we add that term to the active set, which contains the terms currently selected by the SPORE-FoBa algorithm. When considering deleting one term from the active set, we choose the one that makes the sum of residuals increase the least. If this increment is small enough, we delete that term from our current active set.
Algorithm 2 SPORE-FoBa
Input: response $Y$, feature columns $X_1, \ldots, X_p$, the maximum degree $d$
Output: polynomial terms and the weights
1: Let $T = \emptyset$
2: while true do
3:   for $j = 1, \ldots, p$ do
4:     Let $C$ be the candidate set that contains non-linear and interaction terms from Equation (3)
5:     Use linear FoBa to select terms from $C$ to form the new active set $T$.
6:     Use linear FoBa to delete terms from $T$ to form a new active set $T$.
7:   if no terms can be added or deleted then
8:     break
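The forward half of the selection rule can be sketched as follows (a simplified illustration of the RSS-drop test with threshold $\epsilon$; the backward step mirrors it by deleting the term whose removal increases the RSS least):

import numpy as np

def rss(X, Y, cols):
    # Residual sum of squares of a least-squares fit on the given columns.
    if not cols:
        return float(Y @ Y)
    A = X[:, sorted(cols)]
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    r = Y - A @ coef
    return float(r @ r)

def forward_step(X, Y, active, candidates, eps):
    base = rss(X, Y, active)
    drops = {c: base - rss(X, Y, active | {c}) for c in candidates - active}
    best = max(drops, key=drops.get) if drops else None
    if best is not None and drops[best] > eps:
        return active | {best}, True   # accept the best candidate term
    return active, False               # no candidate lowers the RSS enough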
[Figure 1 comprises three panels, (a) Lucene, (b) Find Maxima, (c) Segmentation, each plotting Prediction Error (y-axis, 0 to 0.2) against Percentage of Training Data (x-axis, 0 to 0.6) for SPORE-LASSO and SPORE-FoBa.]
Figure 1: Prediction errors of our algorithms across the three data sets varying training-set fractions.
4 Evaluation Results
We now experimentally demonstrate that our algorithms are practical, give highly accurate predictors for real problems with small training-set sizes, compare favorably in accuracy to other state-of-the-art sparse-regression algorithms, and produce interpretable, intuitive models.
To evaluate our algorithms, we use as case studies three programs: the Lucene Search Engine [4],
and two image processing algorithms, one for finding maxima and one for segmenting an image
(both of which are implemented within the ImageJ image processing framework [3]). We chose
all three programs according to two criteria. First and most importantly, we sought programs with
high variability in the predicted measure (execution time), especially in the face of otherwise similar
inputs (e.g., image files of roughly the same size for image processing). Second, we sought programs
that implement reasonably complex functionality, for which an inexperienced observer would not
be able to trivially identify the important features.
Our collected datasets are as follows. For Lucene, we used a variety of text input queries from
two corpora: the works of Shakespeare and the King James Bible. We collected a data set with
n = 3840 samples, each of which consists of an execution time and a total of p = 126 automatically
generated features. The time values are in range of (0.88, 1.13) with standard deviation 0.19. For
the Find Maxima program within the ImageJ framework, we collected n = 3045 samples (from an
equal number of distinct, diverse images obtained from three vision corpora [1, 2, 5]), and a total of
p = 182 features. The execution time values are in range of (0.09, 2.99) with standard deviation
0.24. Finally, from the Segmentation program within the same ImageJ framework on the same image
set, we collected again n = 3045 samples, and a total of p = 816 features for each. The time values
are in range of (0.21, 58.05) with standard deviation 3.05. In all the experiments, we fix degree
d = 3 for polynomial expansion, and normalize each column of feature data into the range [0, 1].
Prediction Error. We first show that our algorithms predict accurately, even when training on a
small number of samples, in both absolute and relative terms. The accuracy measure we use is the relative prediction error, defined as $\frac{1}{n_t} \sum_i \left|\frac{\hat{y}_i - y_i}{y_i}\right|$, where $n_t$ is the size of the test data set, and the $\hat{y}_i$'s and $y_i$'s are the predicted and actual responses of test data, respectively.
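In code, with yhat and y the arrays of predicted and actual test responses:

import numpy as np

def relative_error(yhat, y):
    # Mean relative prediction error over the test set.
    return np.mean(np.abs((yhat - y) / y))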
We randomly split every data set into a training set and a test set for a given training-set fraction,
train the algorithms and measure their prediction error on the test data. For each training fraction,
we repeat the ?splitting, training and testing? procedure 10 times and show the mean and standard
deviation of prediction error in Figure 1. We see that our algorithms have high prediction accuracy,
even when training on only 10% or less of the data (roughly 300 - 400 samples). Specifically,
both of our algorithms can achieve less than 7% prediction error on both Lucene and Find Maxima
datasets; on the segmentation dataset, SPORE-FoBa achieves less than 8% prediction error, and
SPORE-LASSO achieves around 10% prediction error on average.
Comparisons to State-of-the-Art. We compare our algorithms to several existing sparse regression
methods by examining their prediction errors at different sparsity levels (the number of features used
in the model), and show our algorithms can clearly outperform LASSO, FoBa and recently proposed
non-parametric greedy methods [18] (Figure 2). As a non-parametric greedy algorithm, we use Additive Forward Regression (AFR), because it is faster and often achieves better prediction accuracy
than Generalized Forward Regression (GFR) algorithms. We use the Glmnet Matlab implementation of LASSO to obtain the LASSO solution path [10]. Since FoBa and SPORE-FoBa naturally
produce a path by adding or deleting features (or terms), we record the prediction error at each step.
When two steps have the same sparsity level, we report the smallest prediction error. To generate
the solution path for SPORE-LASSO, we first use Glmnet to generate a solution path for linear
LASSO; then at each sparsity level k, we perform full polynomial expansion with d = 3 on the
selected k features, obtain a solution path on the expanded data, and choose the model with the
smallest prediction error among all models computed from all active feature sets of size k. From the
figure, we see that our SPORE algorithms have comparable performance, and both of them clearly
achieve better prediction accuracy than LASSO, FoBa, and AFR. None of the existing methods can
build models within 10% of relative prediction error. We believe this is because execution time of a
computer program often depends on non-linear combinations of different features, which is usually
not well-handled by either linear methods or the additive non-parametric methods. Instead, both of
our algorithms can select 2-3 high-quality features and build models with non-linear combinations
of them to predict execution time with high accuracy.
[Figure 2 comprises three panels, (a) Lucene, (b) Find Maxima, (c) Segmentation, each plotting relative Prediction Error (y-axis) against Sparsity (x-axis, 1 to 7) for LASSO, FoBa, AFR, SPORE-LASSO, and SPORE-FoBa.]
Figure 2: Performance of the algorithms: relative prediction error versus sparsity level.
Model Interpretability. To gain better understanding, we investigate the details of the model constructed by SPORE-FoBa for Find Maxima. Our conclusions are similar for the other case studies,
but we omit them due to space. We see that with different training set fractions and with different
sparsity configurations, SPORE-FoBa can always select two high-quality features from hundreds of
automatically generated ones. By consulting with experts of the Find Maxima program, we find that
the two selected features correspond to the width (w) and height (h) of the region of interest in the
image, which may in practice differ from the actual image width and height. Those are indeed the
most important factors for determining the execution time of the particular algorithm used. For a
10% training set fraction and $\epsilon = 0.01$, SPORE-FoBa obtained
$$f(w, h) = 0.1 + 0.22w + 0.23h + 1.93wh + 0.24wh^2,$$
which uses non-linear feature terms (e.g., $wh$, $wh^2$) to predict the execution time accurately (around 5.5% prediction error). Especially when Find Maxima is used as a component of a more complex image processing pipeline, this model would not be the most obvious choice even for an expert to pick. On the contrary, as observed in our experiments, neither the linear nor the additive sparse methods handle such nonlinear terms well, resulting in inferior prediction performance. A more
detailed comparison across different methods is the subject of our on-going work.
5 Conclusion
In this paper, we proposed the SPORE (Sparse POlynomial REgression) methodology to build the
relationship between execution time of computer programs and features of the programs. We introduced two algorithms to learn a SPORE model, and showed that both algorithms can predict
execution time with more than 93% accuracy for the applications we tested. For the three test cases,
these results present a significant improvement (a 40% or more reduction in prediction error) over
other sparse modeling techniques in the literature when applied to this problem. Hence our work
provides one convincing example of using sparse non-linear regression techniques to solve real
problems. Moreover, the SPORE methodology is a general methodology that can be used to model
computer program performance metrics other than execution time and solve problems from other
areas of science and engineering.
References
[1] Caltech 101 Object Categories. http://www.vision.caltech.edu/Image_Datasets/Caltech101/Caltech101.html.
[2] Event Dataset. http://vision.stanford.edu/lijiali/event_dataset/.
[3] ImageJ. http://rsbweb.nih.gov/ij/.
[4] Mahout. lucene.apache.org/mahout.
[5] Visual Object Classes Challenge 2008. http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2008/.
[6] S. Chen, K. Joshi, M. A. Hiltunen, W. H. Sanders, and R. D. Schlichting. Link gradients: Predicting the impact of network latency on multitier applications. In INFOCOM, 2009.
[7] B.-G. Chun, L. Huang, S. Lee, P. Maniatis, and M. Naik. Mantis: Predicting system performance through program analysis and modeling. Technical Report, 2010. arXiv:1010.0019v1 [cs.PF].
[8] D. Donoho. For most large underdetermined systems of equations, the minimal l1-norm solution is the sparsest solution. Communications on Pure and Applied Mathematics, 59:797-829, 2006.
[9] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407-499, 2004.
[10] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1), 2010.
[11] A. Ganapathi, H. Kuno, U. Dayal, J. L. Wiener, A. Fox, M. Jordan, and D. Patterson. Predicting multiple metrics for queries: Better decisions enabled by machine learning. In ICDE, 2009.
[12] S. Goldsmith, A. Aiken, and D. Wilkerson. Measuring empirical computational complexity. In FSE, 2007.
[13] C. Gupta, A. Mehta, and U. Dayal. PQR: Predicting query execution times for autonomous workload management. In ICAC, 2008.
[14] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2009.
[15] M. Isard, V. Prabhakaran, J. Currey, U. Wieder, K. Talwar, and A. Goldberg. Quincy: Fair scheduling for distributed computing clusters. In Proceedings of SOSP'09, 2009.
[16] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. An interior-point method for large-scale l1-regularized least squares. IEEE Journal on Selected Topics in Signal Processing, 1(4):606-617, 2007.
[17] Z. Li, M. Zhang, Z. Zhu, Y. Chen, A. Greenberg, and Y.-M. Wang. WebProphet: Automating performance prediction for web services. In NSDI, 2010.
[18] H. Liu and X. Chen. Nonparametric greedy algorithm for the sparse learning problems. In NIPS 22, 2009.
[19] M. Osborne, B. Presnell, and B. Turlach. On the lasso and its dual. Journal of Computational and Graphical Statistics, 9(2):319-337, 2000.
[20] P. Ravikumar, J. Lafferty, H. Liu, and L. Wasserman. Sparse additive models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(5):1009-1030, 2009.
[21] P. Ravikumar, V. Vu, B. Yu, T. Naselaris, K. Kay, J. Gallant, and C. Berkeley. Nonparametric sparse hierarchical models describe V1 fMRI responses to natural images. Advances in Neural Information Processing Systems (NIPS), 21, 2008.
[22] S. A. Seshia and A. Rakhlin. Game-theoretic timing analysis. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 575-582. IEEE Press, Nov. 2008.
[23] S. A. Seshia and A. Rakhlin. Quantitative analysis of systems using game-theoretic learning. ACM Transactions on Embedded Computing Systems (TECS), 2010. To appear.
[24] M. Tariq, A. Zeitoun, V. Valancius, N. Feamster, and M. Ammar. Answering what-if deployment and configuration questions with WISE. In ACM SIGCOMM, 2008.
[25] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc. B, 1996.
[26] M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55:2183-2202, 2009.
[27] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. Advances in Neural Information Processing Systems, 22, 2008.
[28] P. Zhao and B. Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research, 7:2541-2563, 2006.
[29] H. Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418-1429, 2006.
3,474 | 4,146 | Reward Design via Online Gradient Ascent
Jonathan Sorg
Computer Science and Eng.
University of Michigan
[email protected]
Satinder Singh
Computer Science and Eng.
University of Michigan
[email protected]
Richard L. Lewis
Department of Psychology
University of Michigan
[email protected]
Abstract
Recent work has demonstrated that when artificial agents are limited in their
ability to achieve their goals, the agent designer can benefit by making the agent's
goals different from the designer's. This gives rise to the optimization problem of
designing the artificial agent's goals: in the RL framework, designing the agent's
reward function. Existing attempts at solving this optimal reward problem do not
leverage experience gained online during the agent's lifetime nor do they take
advantage of knowledge about the agent's structure. In this work, we develop a
gradient ascent approach with formal convergence guarantees for approximately
solving the optimal reward problem online during an agent's lifetime. We show that
our method generalizes a standard policy gradient approach, and we demonstrate
its ability to improve reward functions in agents with various forms of limitations.
1 The Optimal Reward Problem
In this work, we consider the scenario of an agent designer building an autonomous agent. The
designer has his or her own goals which must be translated into goals for the autonomous agent.
We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This
leads to the optimal reward problem of designing the agent's reward function so as to maximize the
objective reward received by the agent designer.
Typically, the designer assigns his or her own reward to the agent. However, there is ample work
which demonstrates the benefit of assigning reward which does not match the designer?s. For example,
work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering
the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19]
add bonuses to the objective reward to achieve optimism under uncertainty. These approaches
explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that
which would occur using the objective reward function. These methods do not explicitly consider the
optimal reward problem; however, they do show improved performance through reward modification.
In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an
explicit hypothesis about the benefit of reward design: that it helps mitigate the performance loss
caused by computational constraints (bounds) on agent architectures. We considered various types of
agent limitations (limits on planning depth, failure to account for partial observability, and other
erroneous modeling assumptions) and demonstrated the benefits of good reward functions in each
case empirically. Crucially, in bounded agents, the optimal reward function often leads to behavior
that is different from the asymptotic behavior achieved with the objective reward function.
In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving
reward functions for a family of bounded agents that behave according to repeated local (from
the current state) model-based planning. We show that this algorithm is capable of improving
the reward functions in agents with computational limitations necessitating small bounds on the
depth of planning, and also from the use of an inaccurate model (which may be inaccurate due
to computationally-motivated approximations). PGRD has few parameters, improves the reward
function online during an agent's lifetime, takes advantage of knowledge about the agent's structure
(through the gradient computation), and is linear in the number of reward function parameters.
Notation. Formally, we consider discrete-time partially-observable environments with a finite number of hidden states $s \in S$, actions $a \in A$, and observations $o \in O$; these finite-set assumptions are useful for our theorems, but our algorithm can handle infinite sets in practice. Its dynamics are governed by a state-transition function $P(s'|s, a)$ that defines a distribution over next states $s'$ conditioned on current state $s$ and action $a$, and an observation function $\Omega(o|s)$ that defines a distribution over observations $o$ conditioned on current state $s$.
The agent designer's goals are specified via the objective reward function $R_O$. At each time step, the designer receives reward $R_O(s_t) \in [0, 1]$ based on the current state $s_t$ of the environment, where the subscript denotes time. The designer's objective return is the expected mean objective reward obtained over an infinite horizon, i.e., $\lim_{N\to\infty} E\left[\frac{1}{N}\sum_{t=0}^{N} R_O(s_t)\right]$. In the standard view of RL, the agent uses the same reward function as the designer to align the interests of the agent and the designer. Here we allow for a separate agent reward function $R(\cdot)$. An agent's reward function can in general be defined in terms of the history of actions and observations, but is often more pragmatically defined in terms of some abstraction of history. We define the agent's reward function precisely in Section 2.
Optimal Reward Problem. An RL agent attempts to act so as to maximize its own cumulative reward, or return. Crucially, as a result, the sequence of environment states $\{s_t\}_{t=0}^{\infty}$ is affected by the choice of reward function; therefore, the agent designer's return is affected as well. The optimal reward problem arises from the fact that while the objective reward function is fixed as part of the problem description, the reward function is a choice to be made by the designer. We capture this choice abstractly by letting the reward be parameterized by some vector of parameters $\theta$ chosen from a space of parameters $\Theta$. Each $\theta \in \Theta$ specifies a reward function $R(\cdot;\theta)$ which in turn produces a distribution over environment state sequences via whatever RL method the agent uses. The expected return obtained by the designer for choice $\theta$ is $\mathcal{U}(\theta) = \lim_{N\to\infty} E\left[\frac{1}{N}\sum_{t=0}^{N} R_O(s_t) \,\middle|\, R(\cdot;\theta)\right]$. The
optimal reward parameters are given by the solution to the optimal reward problem [16, 17, 18]:
#
"
N
1 X
?
RO (st )R(?; ?) .
(1)
? = arg max U(?) = arg max lim E
???
??? N ??
N
t=0
Our previous research on solving the optimal reward problem has focused primarily on the properties
of the optimal reward function and its correspondence to the agent architecture and the environment [16, 17, 18]. This work has used inefficient exhaustive search methods for finding good
approximations to $\theta^*$ (though there is recent work on using genetic algorithms to do this [6, 9, 12]).
Our primary contribution in this paper is a new convergent online stochastic gradient method for
finding approximately optimal reward functions. To our knowledge, this is the first algorithm that
improves reward functions in an online setting, during a single agent's lifetime.
In Section 2, we present the PGRD algorithm, prove its convergence, and relate it to OLPOMDP [2],
a policy gradient algorithm. In Section 3, we present experiments demonstrating PGRD's ability to
approximately solve the optimal reward problem online.
2 PGRD: Policy Gradient for Reward Design
PGRD builds on the following insight: the agent's planning algorithm procedurally converts the reward function into behavior; thus, the reward function can be viewed as a specific parameterization of the agent's policy. Using this insight, PGRD updates the reward parameters by estimating the gradient of the objective return with respect to the reward parameters, $\nabla_\theta\,\mathcal{U}(\theta)$, from experience, using standard policy gradient techniques. In fact, we show that PGRD can be viewed as an (independently
interesting) generalization of the policy gradient method OLPOMDP [2]. Specifically, we show that
OLPOMDP is a special case of PGRD when the planning depth d is zero. In this section, we first
present the family of local planning agents for which PGRD improves the reward function. Next, we
develop PGRD and prove its convergence. Finally, we show that PGRD generalizes OLPOMDP and
discuss how adding planning to OLPOMDP affects the space of policies available to the optimization
method.
Input: $T$, $\theta_0$, $\{\alpha_t\}_{t=0}^{\infty}$, $\beta$, $\lambda$
1: $o_0, i_0$ = initializeStart();
2: for $t = 0, 1, 2, 3, \ldots$ do
3:   $\forall a\ Q_t(a; \theta_t)$ = plan($i_t$, $o_t$, $T$, $R(i_t, \cdot, \cdot; \theta_t)$, $d$, $\gamma$);
4:   $a_t \sim \mu(a \mid i_t; Q_t)$;
5:   $r_{t+1}, o_{t+1}$ = takeAction($a_t$);
6:   $z_{t+1} = \beta z_t + \frac{\nabla_{\theta_t}\mu(a_t \mid i_t; Q_t)}{\mu(a_t \mid i_t; Q_t)}$;
7:   $\theta_{t+1} = \theta_t + \alpha_t(r_{t+1} z_{t+1} - \lambda\theta_t)$;
8:   $i_{t+1}$ = updateInternalState($i_t$, $a_t$, $o_{t+1}$);
9: end
Figure 1: PGRD (Policy Gradient for Reward Design) Algorithm
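A direct Python transcription of Figure 1 might look as follows; plan, softmax_policy, grad_policy, take_action, and update_internal_state are assumed helpers with the semantics described in the text, so this is a sketch rather than the authors' code.

import numpy as np

def pgrd(theta, i, o, T, d, gamma, beta, lam, alphas, steps):
    z = np.zeros_like(theta)
    for t in range(steps):
        Q = plan(i, o, T, theta, d, gamma)            # line 3: depth-d planning
        probs = softmax_policy(Q)                     # Boltzmann policy mu
        a = np.random.choice(len(Q), p=probs)         # line 4: sample action
        r, o_next = take_action(a)                    # line 5
        g = grad_policy(a, i, o, T, theta, d, gamma)  # gradient of mu w.r.t. theta
        z = beta * z + g / probs[a]                   # line 6: eligibility trace
        theta = theta + alphas[t] * (r * z - lam * theta)  # line 7: regularized ascent
        i = update_internal_state(i, a, o_next)       # line 8
        o = o_next
    return theta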
A Family of Limited Agents with Internal State. Given a Markov model $T$ defined over the observation space $O$ and action space $A$, denote by $T(o'|o, a)$ the probability of next observation $o'$ given that the agent takes action $a$ after observing $o$. Our agents use the model $T$ to plan. We do not assume that the model $T$ is an accurate model of the environment. The use of an incorrect model is
one type of agent limitation we examine in our experiments. In general, agents can use non-Markov
models defined in terms of the history of observations and actions; we leave this for future work.
The agent maintains an internal state feature vector $i_t$ that is updated at each time step using $i_{t+1}$ = updateInternalState($i_t$, $a_t$, $o_{t+1}$). The internal state allows the agent to use reward functions that depend on the agent's history. We consider rewards of the form $R(i_t, o, a; \theta_t) = \theta_t^T \phi(i_t, o, a)$, where $\theta_t$ is the reward parameter vector at time $t$, and $\phi(i_t, o, a)$ is a vector of features based on internal state $i_t$, planning state $o$, and action $a$. Note that if $\phi$ is a vector of binary indicator features, this representation allows for arbitrary reward functions and thus the representation is completely
general. Many existing methods use reward functions that depend on history. Reward functions based
on empirical counts of observations, as in PAC-MDP approaches [5, 20], provide some examples;
see [14, 15, 13] for others. We present a concrete example in our empirical section.
At each time step $t$, the agent's planning algorithm, plan, performs depth-$d$ planning using the model $T$ and reward function $R(i_t, o, a; \theta_t)$ with current internal state $i_t$ and reward parameters $\theta_t$. Specifically, the agent computes a $d$-step Q-value function $Q^d(i_t, o_t, a; \theta_t)$ $\forall a \in A$, where $Q^d(i_t, o, a; \theta_t) = R(i_t, o, a; \theta_t) + \gamma \sum_{o' \in O} T(o'|o, a) \max_{b \in A} Q^{d-1}(i_t, o', b; \theta_t)$ and $Q^0(i_t, o, a; \theta_t) = R(i_t, o, a; \theta_t)$. We emphasize that the internal state $i_t$ and reward parameters $\theta_t$ are held invariant while planning. Note that the $d$-step Q-values are only computed for the current observation $o_t$, in effect by building a depth-$d$ tree rooted at $o_t$. In the $d = 0$ special case, the planning procedure completely ignores the model $T$ and returns $Q^0(i_t, o_t, a; \theta_t) = R(i_t, o_t, a; \theta_t)$. Regardless of the value of $d$, we treat the end result of planning as providing a scoring function $Q_t(a; \theta_t)$, where the dependence on $d$, $i_t$ and $o_t$ is dropped from the notation. To allow for gradient calculations, our agents act according to the Boltzmann (soft-max) stochastic policy parameterized by $Q$: $\mu(a|i_t; Q_t) \stackrel{\text{def}}{=} \frac{e^{\tau Q_t(a;\theta_t)}}{\sum_b e^{\tau Q_t(b;\theta_t)}}$, where $\tau$ is a temperature parameter that determines how stochastically the agent selects the action with the highest score. When the planning depth $d$ is small due to computational limitations, the agent cannot account for events beyond the planning depth. We examine this limitation in our experiments.
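A sketch of the planner and policy over finite sets, assuming T[o][a] maps next observations to probabilities and R(i, o, a) is the agent's reward; the names and data layout are illustrative, not the paper's.

import numpy as np

def q_plan(R, T, i, o, d, gamma, actions):
    # Depth-d Q-values rooted at observation o; Q^0 is the immediate reward.
    if d == 0:
        return {a: R(i, o, a) for a in actions}
    q = {}
    for a in actions:
        future = 0.0
        for o2, p in T[o][a].items():
            future += p * max(q_plan(R, T, i, o2, d - 1, gamma, actions).values())
        q[a] = R(i, o, a) + gamma * future
    return q

def boltzmann(q_values, tau):
    v = np.array(list(q_values.values()))
    e = np.exp(tau * (v - v.max()))  # subtract the max for numerical stability
    return e / e.sum()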
Gradient Ascent. To develop a gradient algorithm for improving the reward function, we need
to compute the gradient of the objective return with respect to ?: ?? U(?). The main insight is to
break the gradient calculation into the calculation of two gradients. The first is the gradient of the
objective return with respect to the policy ?, and the second is the gradient of the policy with respect
to the reward function parameters ?. The first gradient is exactly what is computed in standard
policy gradient approaches [2]. The second gradient is challenging because the transformation from
reward parameters to policy involves a model-based planning procedure. We draw from the work of
Neu and Szepesvári [10], which shows that this gradient computation resembles planning itself. We
develop PGRD, presented in Figure 1, explicitly as a generalization of OLPOMDP, a policy gradient
algorithm developed by Bartlett and Baxter [2], because of its foundational simplicity relative to
other policy-gradient algorithms such as those based on actor-critic methods (e.g., [4]). Notably, the
reward parameters are the only parameters being learned in PGRD.
PGRD follows the form of OLPOMDP (Algorithm 1 in Bartlett and Baxter [2]) but generalizes it
in three places. In Figure 1 line 3, the agent plans to compute the policy, rather than storing the
policy directly. In line 6, the gradient of the policy with respect to the parameters accounts for the
planning procedure. In line 8, the agent maintains a general notion of internal state that allows for
richer parameterization of policies than typically considered (similar to Aberdeen and Baxter [1]).
The algorithm takes as parameters a sequence of learning rates {αk}, a decaying-average parameter β, and a regularization parameter λ > 0 which keeps the reward parameters θ bounded throughout learning. Given a sequence of calculations of the gradient of the policy with respect to the parameters, ∇θt π(at|it; Qt), the remainder of the algorithm climbs the gradient of objective return ∇θ U(θ) using OLPOMDP machinery. In the next subsection, we discuss how to compute ∇θt π(at|it; Qt).
Computing the Gradient of the Policy with respect to Reward. For the Boltzmann distribution, the gradient of the policy with respect to the reward parameters is given by the equation

∇θt π(a|it; Qt) = τ · π(a|Qt) [∇θt Qt(a; θt) − Σ_{b∈A} π(b|Qt) ∇θt Qt(b; θt)],

where τ is the Boltzmann temperature (see [10]). Thus, computing ∇θt π(a|it; Qt) reduces to computing ∇θt Qt(a; θt).
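In code, this softmax-gradient identity is a one-liner over stacked gradients. A sketch; `grad_Q` is assumed to hold one row per action, namely ∇θt Qt(a; θt):

```python
import numpy as np

def grad_policy(probs, grad_Q, tau):
    """Gradient of the Boltzmann policy with respect to the reward parameters.

    probs:  (n_act,) action probabilities pi(a | Q).
    grad_Q: (n_act, n_params) rows holding grad_theta Q(a; theta).
    Returns an (n_act, n_params) array whose rows are grad_theta pi(a).
    """
    baseline = probs @ grad_Q                 # sum_b pi(b | Q) grad_theta Q(b)
    return tau * probs[:, None] * (grad_Q - baseline)
```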
The value of Qt depends on the reward parameters θt, the model, and the planning depth. However,
as we present below, the process of computing the gradient closely resembles the process of planning
itself, and the two computations can be interleaved. Theorem 1 presented below is an adaptation
of Proposition 4 from Neu and Szepesvári [10]. It presents the gradient computation for depth-d planning as well as for infinite-depth discounted planning. We assume that the gradient of the reward function with respect to the parameters is bounded: sup_{θ,o,i,a} ‖∇θ R(i, o, a; θ)‖ < ∞. The proof of the theorem follows directly from Proposition 4 of Neu and Szepesvári [10].
Theorem 1. Except on a set of measure zero, for any depth d, the gradient ∇θ Q^d(o, a; θ) exists and is given by the recursion (where we have dropped the dependence on i for simplicity)

∇θ Q^d(o, a; θ) = ∇θ R(o, a; θ) + γ Σ_{o′∈O} T(o′|o, a) Σ_{b∈A} π^{d−1}(b|o′) ∇θ Q^{d−1}(o′, b; θ),   (2)

where ∇θ Q^0(o, a; θ) = ∇θ R(o, a; θ) and π^d(a|o) ∈ arg max_a Q^d(o, a; θ) is any policy that is greedy with respect to Q^d. The result also holds for ∇θ Q^∞(o, a; θ) = ∇θ lim_{d→∞} Q^d(o, a; θ).
The Q-function will not be differentiable when there are multiple optimal policies. This is reflected in the arbitrary choice of the greedy policy π^{d−1} in the gradient calculation. However, it was shown by Neu and Szepesvári [10] that even at values of θ where the policy is not differentiable, the above computation produces a valid calculation of a subgradient; we discuss this below in our proof of convergence of PGRD.
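The recursion of Equation 2 can be interleaved with planning, as the following sketch shows (finite tables again; `grad_R` stacks ∇θ R(o, a; θ), and its shape is an assumption made for illustration):

```python
import numpy as np

def plan_with_gradient(T, R, grad_R, d, gamma):
    """Depth-d Q-values and their reward-parameter gradients, per Eq. 2.

    T: (n_obs, n_act, n_obs); R: (n_obs, n_act); grad_R: (n_obs, n_act, n_params).
    """
    Q, gQ = R.copy(), grad_R.copy()
    rows = np.arange(R.shape[0])
    for _ in range(d):
        greedy = Q.argmax(axis=1)                 # one greedy action per observation
        V, gV = Q[rows, greedy], gQ[rows, greedy] # values/gradients of greedy actions
        Q = R + gamma * np.einsum('oan,n->oa', T, V)
        gQ = grad_R + gamma * np.einsum('oan,np->oap', T, gV)
    return Q, gQ
```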
Convergence of PGRD (Figure 1). Given a particular fixed reward function R(·; θ), transition model T, and planning depth, there is a corresponding fixed randomized policy π(a|i; θ), where we have explicitly represented the reward's dependence on the internal state vector i in the policy parameterization and dropped Q from the notation as it is redundant given that everything else is fixed. Denote the agent's internal-state update as a (usually deterministic) distribution ξ(i′|i, a, o). Given a fixed reward parameter vector θ, the joint environment-state/internal-state transitions can be modeled as a Markov chain with a |S||I| × |S||I| transition matrix M(θ) whose entries are given by M_{⟨s,i⟩,⟨s′,i′⟩}(θ) = p(⟨s′,i′⟩|⟨s,i⟩; θ) = Σ_{o,a} ξ(i′|i, a, o) Ω(o|s′) P(s′|s, a) π(a|i; θ). We make the
following assumptions about the agent and the environment:
Assumption 1. The transition matrix M(θ) of the joint environment-state/internal-state Markov chain has a unique stationary distribution μ(θ) = [μ_{s1,i1}(θ), μ_{s2,i2}(θ), . . . , μ_{s|S|,i|I|}(θ)] satisfying the balance equations μ(θ) M(θ) = μ(θ), for all θ ∈ Θ.
Assumption 2. During its execution, PGRD (Figure 1) does not reach a value of it and θt at which π(at|it; Qt) is not differentiable with respect to θt.
It follows from Assumption 1 that the objective return, U(θ), is independent of the start state. The
original OLPOMDP convergence proof [2] has a similar condition that only considers environment
states. Intuitively, this condition allows PGRD to handle history-dependence of a reward function in
the same manner that it handles partial observability in an environment. Assumption 2 accounts for
the fact that a planning algorithm may not be fully differentiable everywhere. However, Theorem 1
showed that infinite and bounded-depth planning is differentiable almost everywhere (in a measure
theoretic sense). Furthermore, this assumption is perhaps stronger than necessary, as stochastic
approximation algorithms, which provide the theory upon which OLPOMDP is based, have been
shown to converge using subgradients [8].
In order to state the convergence theorem, we must define the approximate gradient which OLPOMDP calculates. Let the approximate gradient estimate be ∇̃β U(θ) = lim_{T→∞} (1/T) Σ_{t=1}^{T} rt zt for a fixed θ and PGRD parameter β, where zt (in Figure 1) represents a time-decaying average of the ∇θt π(at|it; Qt) calculations. It was shown by Bartlett and Baxter [2] that ∇̃β U(θ) is close to the true value ∇θ U(θ) for large values of β. Theorem 2 proves that PGRD converges to a stable equilibrium point based on
this approximate gradient measure. This equilibrium point will typically correspond to some local
optimum in the return function U(?). Given our development and assumptions, the theorem is a
straightforward extension of Theorem 6 from Bartlett and Baxter [2] (proof omitted).
Theorem 2. Given β ∈ [0, 1), λ > 0, and a sequence of step sizes αt satisfying Σ_{t=0}^{∞} αt = ∞ and Σ_{t=0}^{∞} αt² < ∞, PGRD produces a sequence of reward parameters θt such that θt → L as t → ∞ a.s., where L is the set of stable equilibrium points of the differential equation θ̇ = ∇̃β U(θ) − λθ.
PGRD generalizes OLPOMDP. As stated above, OLPOMDP, when it uses a Boltzmann distribution
in its policy representation (a common case), is a special case of PGRD when the planning depth
is zero. First, notice that in the case of depth-0 planning, Q^0(i, o, a; θ) = R(i, o, a; θ), regardless of the transition model and reward parameterization. We can also see from Theorem 1 that ∇θ Q^0(i, o, a; θ) = ∇θ R(i, o, a; θ). Because R(i, o, a; θ) can be parameterized arbitrarily, PGRD
can be configured to match standard OLPOMDP with any policy parameterization that also computes
a score function for the Boltzmann distribution.
In our experiments, we demonstrate that choosing a planning depth d > 0 can be beneficial over using
OLPOMDP (d = 0). In the remainder of this section, we show theoretically that choosing d > 0
does not hurt in the sense that it does not reduce the space of policies available to the policy gradient
method. Specifically, we show that when using an expressive enough reward parameterization,
PGRD's space of policies is not restricted relative to OLPOMDP's space of policies. We prove the
result for infinite planning, but the extension to depth-limited planning is straightforward.
Theorem 3. There exists a reward parameterization such that, for an arbitrary transition model
T , the space of policies representable by PGRD with infinite planning is identical to the space of
policies representable by PGRD with depth 0 planning.
Proof. Ignoring internal state for now (holding it constant), let C(o, a) be an arbitrary reward
function used by PGRD with depth 0 planning. Let R(o, a; θ) be a reward function for PGRD with infinite (d = ∞) planning. The depth-∞ agent uses the planning result Q^∞(o, a; θ) to act, while the depth-0 agent uses the function C(o, a) to act. Therefore, it suffices to show that one can always choose θ such that the planning solution Q^∞(o, a; θ) equals C(o, a). For all o ∈ O, a ∈ A, set R(o, a; θ) = C(o, a) − γ Σ_{o′} T(o′|o, a) max_{a′} C(o′, a′). Substituting Q^∞ for C, this is the Bellman optimality equation [22] for infinite-horizon planning. Setting R(o, a; θ) as above is possible if it is
parameterized by a table with an entry for each observation-action pair.
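The construction in the proof is easy to check numerically: with R built as below, C is a fixed point of the Bellman backup, so infinite-depth planning under R reproduces C. A minimal sketch, with finite tables and illustrative names:

```python
import numpy as np

def reward_matching_scores(C, T, gamma):
    """R(o,a) = C(o,a) - gamma * sum_o' T[o,a,o'] max_a' C(o',a')  (proof of Theorem 3)."""
    return C - gamma * np.einsum('oan,n->oa', T, C.max(axis=1))

def verify(C, T, gamma, iters=500):
    """Iterate Q <- R + gamma * T max_b Q; the fixed point should equal C."""
    R, Q = reward_matching_scores(C, T, gamma), np.zeros_like(C)
    for _ in range(iters):
        Q = R + gamma * np.einsum('oan,n->oa', T, Q.max(axis=1))
    return np.allclose(Q, C, atol=1e-6)
```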
Theorem 3 also shows that the effect of an arbitrarily poor model can be overcome with a good choice
of reward function. This is because a Boltzmann distribution can, allowing for an arbitrary scoring
function C, represent any policy. We demonstrate this ability of PGRD in our experiments.
3 Experiments
The primary objective of our experiments is to demonstrate that PGRD is able to use experience
online to improve the reward function parameters, thereby improving the agent?s obtained objective
return. Specifically, we compare the objective return achieved by PGRD to the objective return
achieved by PGRD with the reward adaptation turned off. In both cases, the reward function is
initialized to the objective reward function. A secondary objective is to demonstrate that when a good
model is available, adding the ability to plan?even for small depths?improves performance relative
to the baseline algorithm of OLPOMDP (or equivalently PGRD with depth d = 0).
Foraging Domain for Experiments 1 to 3: The foraging environment illustrated in Figure 2(a) is a
3×3 grid world with 3 dead-end corridors (rows) separated by impassable walls. The agent (bird) has
four available actions corresponding to each cardinal direction. Movement in the intended direction
fails with probability 0.1, resulting in movement in a random direction. If the resulting direction is
[Figure 2(b,c) plots omitted: objective return vs. time steps (1000-5000) for various (d, α) settings; the legends pair each depth d ∈ {0, 1, 3, 4, 6} with its tuned α, e.g. d = 6 with α = 0 and α = 5×10⁻⁵, d = 4 with α = 2×10⁻⁴, d = 0 with α = 5×10⁻⁴.]
Figure 2: A) Foraging Domain, B) Performance of PGRD with observation-action reward features,
C) Performance of PGRD with recency reward features
blocked by a wall or the boundary, the action results in no movement. There is a food source (worm)
located in one of the three right-most locations at the end of each corridor. The agent has an eat
action, which consumes the worm when the agent is at the worm's location. After the agent consumes
the worm, a new worm appears randomly in one of the other two potential worm locations.
Objective Reward for the Foraging Domain: The designer's goal is to maximize the average number
of worms eaten per time step. Thus, the objective reward function RO provides a reward of 1.0 when
the agent eats a worm, and a reward of 0 otherwise. The objective return is defined as in Equation (1).
Experimental Methodology: We tested PGRD for depth-limited planning agents of depths 0?6. Recall
that PGRD for the agent with planning depth 0 is the OLPOMDP algorithm. For each depth, we
jointly optimized over the PGRD algorithm parameters, α and β (we use a fixed α throughout learning). We tested values for α on an approximate logarithmic scale in the range (10⁻⁶, 10⁻²) as well as the special value of α = 0, which corresponds to an agent that does not adapt its reward function. We tested β values in the set {0, 0.4, 0.7, 0.9, 0.95, 0.99}. Following common practice [3], we set the λ parameter to 0. We explicitly bounded the reward parameters and capped the reward function output, both to the range [−1, 1]. We used a Boltzmann temperature parameter of τ = 100 and a planning discount factor γ = 0.95. Because we initialized θ so that the initial reward function was the objective reward function, PGRD with α = 0 was equivalent to a standard depth-limited
planning agent.
Experiment 1: A fully observable environment with a correct model learned online. In this
experiment, we improve the reward function in an agent whose only limitation is planning depth,
using (1) a general reward parameterization based on the current observation and (2) a more compact
reward parameterization which also depends on the history of observations.
Observation: The agent observes the full state, which is given by the pair o = (l, w), where l is the
agent's location and w is the worm's location.
Learning a Correct Model: Although the theorem of convergence of PGRD relies on the agent having
a fixed model, the algorithm itself is readily applied to the case of learning a model online. In this
experiment, the agent's model T is learned online based on empirical transition probabilities between
observations (recall this is a fully observable environment). Let n_{o,a,o′} be the number of times that o′ was reached after taking action a after observing o. The agent models the probability of seeing o′ as T(o′|o, a) = n_{o,a,o′} / Σ_{o′′} n_{o,a,o′′}.
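A sketch of this count-based estimate, with observations and actions indexed by integers. The fallback for unvisited (o, a) pairs is an assumption of the sketch, since the paper does not specify one:

```python
import numpy as np

class EmpiricalModel:
    """Maximum-likelihood transition model T(o'|o,a) built from observed triples."""
    def __init__(self, n_obs, n_act):
        self.counts = np.zeros((n_obs, n_act, n_obs))

    def update(self, o, a, o_next):
        self.counts[o, a, o_next] += 1.0

    def T(self):
        totals = self.counts.sum(axis=2, keepdims=True)
        uniform = np.full_like(self.counts, 1.0 / self.counts.shape[2])
        # Normalize counts where data exists; fall back to uniform elsewhere.
        return np.where(totals > 0, self.counts / np.maximum(totals, 1.0), uniform)
```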
Reward Parameterizations: Recall that R(i, o, a; θ) = θ⊤φ(i, o, a), for some φ(i, o, a). (1) In the observation-action parameterization, φ(i, o, a) is a binary feature vector with one binary feature for each observation-action pair; internal state is ignored. This is effectively a table representation over all reward functions indexed by (o, a). As shown in Theorem 3, the observation-action feature representation is capable of producing arbitrary policies over the observations. In large problems, such a parameterization would not be feasible. (2) The recency parameterization is a more compact representation which uses features that rely on the history of observations. The feature vector is φ(i, o, a) = [RO(o, a), 1, φcl(l, i), φcl,a(l, a, i)], where RO(o, a) is the objective reward function defined as above. The feature φcl(l, i) = 1 − 1/c(l, i), where c(l, i) is the number of time steps since the agent has visited location l, as represented in the agent's internal state i. Its value is normalized to the range [0, 1) and is high when the agent has not been to location l recently. The feature φcl,a(l, a, i) = 1 − 1/c(l, a, i) is similarly defined with respect to the time since the agent has taken action a in location l. Features based on recency counts encourage persistent exploration [21, 18].
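A sketch of this feature vector; the exact bookkeeping (dictionary defaults, treating a never-visited location as visited long ago) is an assumption of the sketch:

```python
def recency_features(obj_reward, last_visit_l, last_visit_la, t, l, a):
    """phi(i,o,a) = [R_O(o,a), 1, 1 - 1/c(l,i), 1 - 1/c(l,a,i)], where c counts the
    time steps since the last visit, tracked in internal-state dictionaries."""
    c_l = t - last_visit_l.get(l, -10**9)           # steps since location l was visited
    c_la = t - last_visit_la.get((l, a), -10**9)    # steps since a was taken at l
    return [obj_reward, 1.0, 1.0 - 1.0 / max(c_l, 1), 1.0 - 1.0 / max(c_la, 1)]
```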
Results & Discussion: Figure 2(b) and Figure 2(c) present results for agents that use the observation-action parameterization and the recency parameterization of the reward function, respectively. The
horizontal axis is the number of time steps of experience. The vertical axis is the objective return, i.e.,
the average objective reward per time step. Each curve is an average over 130 trials. The values of d
and the associated optimal algorithm parameters for each curve are noted in the figures. First, note
that with d = 6, the agent is unbounded, because food is never more than 6 steps away. Therefore,
the agent does not benefit from adapting the reward function parameters (given that we initialize
to the objective reward function). Indeed, the d = 6, ? = 0 agent performs as well as the best
reward-optimizing agent. The performance for d = 6 improves with experience because the model
improves with experience (and thus from the curves it is seen that the model gets quite accurate in
about 1500 time steps). The largest objective return obtained for d = 6 is also the best objective
return that can be obtained for any value of d.
Several results can be observed in both Figures 2(b) and (c). 1) Each curve that uses α > 0 (solid lines) improves with experience. This is a demonstration of our primary contribution, that PGRD is able to effectively improve the reward function with experience. That the improvement over time is not just due to model learning is seen in the fact that for each value of d < 6 the curve for α > 0 (solid line), which adapts the reward parameters, does significantly better than the corresponding curve for α = 0 (dashed line); the α = 0 agents still learn the model. 2) For both α = 0 and α > 0
agents, the objective return obtained by agents with equivalent amounts of experience increases
monotonically as d is increased (though to maintain readability we only show selected values of
d in each figure). This demonstrates our secondary contribution, that the ability to plan in PGRD
significantly improves performance over standard OLPOMDP (PGRD with d = 0).
There are also some interesting differences between the results for the two different reward function
parameterizations. With the observation-action parameterization, we noted that there always exists a setting of θ for all d that will yield optimal objective return. This is seen in Figure 2(b) in that all
solid-line curves approach optimal objective return. In contrast, the more compact recency reward
parameterization does not afford this guarantee and indeed for small values of d (< 3), the solid-line
curves in Figure 2(c) converge to less than optimal objective return. Notably, OLPOMDP (d = 0)
does not perform well with this feature set. On the other hand, for planning depths 3 ≤ d < 6, the
PGRD agents with the recency parameterization achieve optimal objective return faster than the
corresponding PGRD agent with the observation-action parameterization. Finally, we note that this
experiment validates our claim that PGRD can improve reward functions that depend on history.
Experiment 2: A fully observable environment and poor given model. Our theoretical analysis
showed that PGRD with an incorrect model and the observation-action reward parameterization
should (modulo local maxima issues) do just as well asymptotically as it would with a correct model.
Here we illustrate this theoretical result empirically on the same foraging domain and objective
reward function used in Experiment 1. We also test our hypothesis that a poor model should slow
down the rate of learning relative to a correct model.
Poor Model: We gave the agents a fixed incorrect model of the foraging environment that assumes
there are no internal walls separating the 3 corridors.
Reward Parameterization: We used the observation-action reward parameterization. With a poor model it is no longer interesting to initialize θ so that the initial reward function is the objective reward function because even for d = 6 such an agent would do poorly. Furthermore, we found that this initialization leads to excessively bad exploration and therefore poor learning of how to modify the reward. Thus, we initialize θ to uniform random values near 0, in the range (−10⁻³, 10⁻³).
Results: Figure 3(a) plots the objective return as a function of number of steps of experience. Each
curve is an average over 36 trials. As hypothesized, the bad model slows learning by a factor of more
than 10 (notice the difference in the x-axis scales from those in Figure 2). Here, deeper planning
results in slower learning and indeed the d = 0 agent that does not use the model at all learns
the fastest. However, also as hypothesized, because they used the expressive observation-action
parameterization, agents of all planning depths mitigate the damage caused by the poor model and
eventually converge to the optimal objective return.
Experiment 3: Partially observable foraging world. Here we evaluate PGRD?s ability to learn in
a partially observable version of the foraging domain. In addition, the agents learn a model under the
erroneous (and computationally convenient) assumption that the domain is fully observable.
[Figure 3 plots omitted: panels A-C show objective return vs. time steps, with legend entries giving each curve's (d, α) setting, e.g. d ∈ {0, 2, 6} with α = 0 and with tuned α values per depth.]
Figure 3: A) Performance of PGRD with a poor model, B) Performance of PGRD in a partially
observable world with recency reward features, C) Performance of PGRD in Acrobot
Partial Observation: Instead of viewing the location of the worm at all times, the agent can now only
see the worm when it is colocated with it: its observation is o = (l, f ), where f indicates whether the
agent is colocated with the food.
Learning an Incorrect Model: The model is learned just as in Experiment 1. Because of the erroneous
full observability assumption, the model will hallucinate about worms at all the corridor ends based
on the empirical frequency of having encountered them there.
Reward Parameterization: We used the recency parameterization; due to the partial observability, agents with the observation-action feature set perform poorly in this environment. The parameters θ are initialized such that the initial reward function equals the objective reward function.
Results & Discussion: Figure 3(b) plots the mean of 260 trials. As seen in the solid-line curves,
PGRD improves the objective return at all depths (only a small amount for d = 0 and significantly
more for d > 0). In fact, agents which don't adapt the reward are hurt by planning (relative to d = 0).
This experiment demonstrates that the combination of planning and reward improvement can be
beneficial even when the model is erroneous. Because of the partial observability, optimal behavior
in this environment achieves less objective return than in Experiment 1.
Experiment 4: Acrobot. In this experiment we test PGRD in the Acrobot environment [22], a
common benchmark task in the RL literature and one that has previously been used in the testing of
policy gradient approaches [23]. This experiment demonstrates PGRD in an environment in which
an agent must be limited due to the size of the state space and further demonstrates that adding
model-based planning to policy gradient approaches can improve performance.
Domain: The version of Acrobot we use is as specified by Sutton and Barto [22]. It is a two-link
robot arm in which the position of one shoulder-joint is fixed and the agent's control is limited to 3
actions which apply torque to the elbow-joint.
Observation: The fully observable state space is 4-dimensional, with two joint angles θ1 and θ2, and two joint velocities θ̇1 and θ̇2.
Objective Reward: The designer receives an objective reward of 1.0 when the tip is one arm's length
above the fixed shoulder-joint, after which the bot is reset to its initial resting position.
Model: We provide the agent with a perfect model of the environment. Because the environment
is continuous, value iteration is intractable, and computational limitations prevent planning deep
enough to compute the optimal action in any state. The feature vector contains 13 entries. One feature
corresponds to the objective reward signal. For each action, there are 5 features corresponding to
each of the state features plus an additional feature representing the height of the tip: ?(i, o, a) =
[RO (o), {?1 (o), ?2 (o), ?? 1 (o), ?? 2 (o), h(o)}a ]. The height feature has been used in previous work as
an alternative definition of objective reward [23].
Results & Discussion: We plot the mean of 80 trials in Figure 3(c). Agents that use the fixed (α = 0)
objective reward function with bounded-depth planning perform according to the bottom two curves.
Allowing PGRD and OLPOMDP to adapt the parameters θ leads to improved objective return, as
seen in the top two curves in Figure 3(c). Finally, the PGRD d = 6 agent outperforms the standard
OLPOMDP agent (PGRD with d = 0), further demonstrating that PGRD outperforms OLPOMDP.
Overall Conclusion: We developed PGRD, a new method for approximately solving the optimal
reward problem in bounded planning agents that can be applied in an online setting. We showed that
PGRD is a generalization of OLPOMDP and demonstrated that it both improves reward functions in
limited agents and outperforms the model-free OLPOMDP approach.
References
[1] Douglas Aberdeen and Jonathan Baxter. Scalable Internal-State Policy-Gradient Methods for POMDPs. In Proceedings of the Nineteenth International Conference on Machine Learning, 2002.
[2] Peter L. Bartlett and Jonathan Baxter. Stochastic optimization of controlled partially observable Markov decision processes. In Proceedings of the 39th IEEE Conference on Decision and Control, 2000.
[3] Jonathan Baxter, Peter L. Bartlett, and Lex Weaver. Experiments with Infinite-Horizon, Policy-Gradient Estimation, 2001.
[4] Shalabh Bhatnagar, Richard S. Sutton, M. Ghavamzadeh, and Mark Lee. Natural actor-critic algorithms. Automatica, 2009.
[5] Ronen I. Brafman and Moshe Tennenholtz. R-MAX - A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning. Journal of Machine Learning Research, 3:213–231, 2001.
[6] S. Elfwing, Eiji Uchibe, K. Doya, and H. I. Christensen. Co-evolution of Shaping Rewards and Meta-Parameters in Reinforcement Learning. Adaptive Behavior, 16(6):400–412, 2008.
[7] J. Zico Kolter and Andrew Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th International Conference on Machine Learning, pages 513–520, 2009.
[8] Harold J. Kushner and G. George Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer, 2nd edition, 2010.
[9] Çetin Meriçli, Tekin Meriçli, and H. Levent Akın. A Reward Function Generation Method Using Genetic Algorithms: A Robot Soccer Case Study (Extended Abstract). In Proc. of the 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010), pages 1513–1514, 2010.
[10] Gergely Neu and Csaba Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, pages 295–302, 2007.
[11] Andrew Y. Ng, Stuart J. Russell, and D. Harada. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the 16th International Conference on Machine Learning, pages 278–287, 1999.
[12] Scott Niekum, Andrew G. Barto, and Lee Spector. Genetic Programming for Reward Function Search. IEEE Transactions on Autonomous Mental Development, 2(2):83–90, 2010.
[13] Pierre-Yves Oudeyer, Frederic Kaplan, and Verena V. Hafner. Intrinsic Motivation Systems for Autonomous Mental Development. IEEE Transactions on Evolutionary Computation, 11(2):265–286, April 2007.
[14] Jürgen Schmidhuber. Curious model-building control systems. In IEEE International Joint Conference on Neural Networks, pages 1458–1463, 1991.
[15] Satinder Singh, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically Motivated Reinforcement Learning. In Proceedings of Advances in Neural Information Processing Systems 17 (NIPS), pages 1281–1288, 2005.
[16] Satinder Singh, Richard L. Lewis, and Andrew G. Barto. Where Do Rewards Come From? In Proceedings of the Annual Conference of the Cognitive Science Society, pages 2601–2606, 2009.
[17] Satinder Singh, Richard L. Lewis, Andrew G. Barto, and Jonathan Sorg. Intrinsically Motivated Reinforcement Learning: An Evolutionary Perspective. IEEE Transactions on Autonomous Mental Development, 2(2):70–82, 2010.
[18] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Internal Rewards Mitigate Agent Boundedness. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[19] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Variance-Based Rewards for Approximate Bayesian Reinforcement Learning. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, 2010.
[20] Alexander L. Strehl and Michael L. Littman. An analysis of model-based Interval Estimation for Markov Decision Processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008.
[21] Richard S. Sutton. Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming. In The Seventh International Conference on Machine Learning, pages 216–224, 1990.
[22] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.
[23] Lex Weaver and Nigel Tao. The Optimal Reward Baseline for Gradient-Based Reinforcement Learning. In Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, pages 538–545, 2001.
3,475 | 4,147 | Kernel Descriptors for Visual Recognition
Liefeng Bo
University of Washington
Seattle WA 98195, USA
Xiaofeng Ren
Intel Labs Seattle
Seattle WA 98105, USA
Dieter Fox
University of Washington & Intel Labs Seattle
Seattle WA 98195 & 98105, USA
Abstract
The design of low-level image features is critical for computer vision algorithms.
Orientation histograms, such as those in SIFT [16] and HOG [3], are the most
successful and popular features for visual object and scene recognition. We highlight the kernel view of orientation histograms, and show that they are equivalent
to a certain type of match kernels over image patches. This novel view allows
us to design a family of kernel descriptors which provide a unified and principled framework to turn pixel attributes (gradient, color, local binary pattern, etc.)
into compact patch-level features. In particular, we introduce three types of match
kernels to measure similarities between image patches, and construct compact
low-dimensional kernel descriptors from these match kernels using kernel principal component analysis (KPCA) [23]. Kernel descriptors are easy to design and
can turn any type of pixel attribute into patch-level features. They outperform
carefully tuned and sophisticated features including SIFT and deep belief networks. We report superior performance on standard image classification benchmarks: Scene-15, Caltech-101, CIFAR10 and CIFAR10-ImageNet.
1 Introduction
Image representation (features) is arguably the most fundamental task in computer vision. The
problem is highly challenging because images exhibit high variations, are highly structured, and
lie in high dimensional spaces. In the past ten years, a large number of low-level features over
images have been proposed. In particular, orientation histograms such as SIFT [16] and HOG [3]
are the most popular low-level features, essential to many computer vision tasks such as object
recognition and 3D reconstruction. The success of SIFT and HOG naturally raises questions on how
they measure the similarity between image patches, how we should understand the design choices in
them, and whether we can find a principled way to design and learn comparable or superior low-level
image features.
In this work, we highlight the kernel view of orientation histograms and provide a unified way to
low-level image feature design and learning. Our low-level image feature extractors, kernel descriptors, consist of three steps: (1) design match kernels using pixel attributes; (2) learn compact basis
vectors using kernel principle component analysis; (3) construct kernel descriptors by projecting the
infinite-dimensional feature vectors to the learned basis vectors. We show how our framework is
applied to gradient, color, and shape pixel attributes, leading to three effective kernel descriptors.
We validate our approach on four standard image category recognition benchmarks, and show that
our kernel descriptors surpass both manually designed and well tuned low-level features (SIFT) [16]
and sophisticated feature learning approaches (convolutional networks, deep belief networks, sparse
coding, etc.) [10, 26, 14, 24].
The most relevant work to this paper is that of efficient match kernels (EMK) [1], which provides
a kernel view to the frequently used Bag-of-Words representation and forms image-level features
by learning compact low dimensional projections or using random Fourier transformations. While
the work on efficient match kernels is interesting, the hand-crafted SIFT features are still used as
the basic building block. Another related work is based on mathematics of the neural response,
which shows that the hierarchical architectures motivated by the neuroscience of the visual cortex are associated with the derived kernel [24]. Instead, the goal of this paper is to provide a deep understanding of how orientation histograms (SIFT and HOG) work, and to show how we can generalize them and design
novel low-level image features based on the kernel insight. Our kernel descriptors are general and
provide a principled way to convert pixel attributes to patch-level features. To the best of our knowledge, this is the first time that low-level image features are designed and learned from scratch using
kernel methods; they can serve as the foundation of many computer vision tasks including object
recognition.
This paper is organized as follows. Section 2 introduces the kernel view of histograms. Our novel
kernel descriptors are presented in Section 3, followed by an extensive experimental evaluation in
Section 4. We conclude in Section 5.
2 Kernel View of Orientation Histograms
Orientation histograms, such as SIFT [16] and HOG [3], are the most commonly used low-level
features for object detection and recognition. Here we describe the kernel view of such orientation
histograms features, and show how this kernel view can help overcome issues such as orientation
binning. Let θ(z) and m(z) be the orientation and magnitude of the image gradient at a pixel z. In HOG and SIFT, the gradient orientation of each pixel is discretized into a d-dimensional indicator vector δ(z) = [δ1(z), · · · , δd(z)] with

δi(z) = 1 if ⌊dθ(z)/(2π)⌋ = i − 1, and δi(z) = 0 otherwise,   (1)

where ⌊x⌋ takes the largest integer less than or equal to x (we will describe soft binning further below). The feature vector of each pixel z is a weighted indicator vector F(z) = m(z)δ(z). Aggregating feature vectors of pixels over an image patch P, we obtain the histogram of oriented gradients:

Fh(P) = Σ_{z∈P} m̃(z) δ(z)   (2)

where m̃(z) = m(z)/√(Σ_{z∈P} m(z)² + εg) is the normalized gradient magnitude, with εg a small constant. P is typically a 4×4 rectangle in SIFT and an 8×8 rectangle in HOG. Without loss of
generality, we consider L2-based normalization here. In object detection [3, 5] and matching based
object recognition [18], linear support vector machines or the L2 distance are commonly applied to
sets of image patch features. This is equivalent to measuring the similarity of image patches using a
linear kernel in the feature map Fh (P ) in kernel space:
Kh(P, Q) = Fh(P)⊤Fh(Q) = Σ_{z∈P} Σ_{z′∈Q} m̃(z) m̃(z′) δ(z)⊤δ(z′)   (3)

where P and Q are patches usually from two different images. In Eq. 3, both km̃(z, z′) = m̃(z) m̃(z′) and kδ(z, z′) = δ(z)⊤δ(z′) are the inner product of two vectors and thus are positive
definite kernels. Therefore, Kh (P, Q) is a match kernel over sets (here the sets are image patches)
as in [8, 1, 11, 17, 7]. Thus Eq. 3 provides a kernel view of HOG features over image patches. For
simplicity, we only use one image patch here; it is straightforward to extend to sets of image patches.
The hard binning underlying Eq. 1 is only for ease of presentation. To get a kernel view of soft
binning [13], we only need to replace the delta function in Eq. 1 by the following soft δ(·) function:

δi(z) = max(cos(θ(z) − ai)⁹, 0)   (4)

where ai is the center of the i-th bin. In addition, one can easily include soft spatial binning by
normalizing gradient magnitudes using the corresponding spatial weights. The squared L2 distance between P and Q can be expressed as D(P, Q) = 2 − 2F(P)⊤F(Q), since F(P)⊤F(P) = 1, and the kernel view can be provided in the same manner.
Figure 1: Pixel attributes. Left: Gradient orientation representation. To measure similarity between two pixel gradient orientations θ and θ′, we use the L2 norm between the normalized gradient vectors θ̃ = [sin(θ) cos(θ)] and θ̃′ = [sin(θ′) cos(θ′)]. The red dots represent the normalized gradient vectors, and the blue line represents the distance between them. Right: Local binary patterns. The values indicate brightness of pixels in a 3×3 patch. Red pixels have intensities larger than the center pixel, blue pixels are darker. The 8-dimensional indicator vector is the resulting local binary pattern.
Note that the kernel km̃(z, z′), measuring the similarity of gradient magnitudes of two pixels, is linear in gradient magnitude. kδ(z, z′) measures the similarity of gradient orientations of two pixels: 1 if two gradient orientations are in the same bin, and 0 otherwise (Eq. 1, hard binning). As can be seen, this kernel introduces quantization errors and could lead to suboptimal performance in subsequent stages of processing. While soft binning results in a smoother kernel function, it still suffers from discretization. This motivates us to search for alternative match kernels which can measure the similarity of image patches more accurately.
3 Kernel Descriptors
3.1 Gradient, Color, and Shape Match Kernels
We introduce the following gradient match kernel, Kgrad, to capture image variations:

Kgrad(P, Q) = Σ_{z∈P} Σ_{z′∈Q} m̃(z) m̃(z′) ko(θ̃(z), θ̃(z′)) kp(z, z′)   (5)

where kp(z, z′) = exp(−γp‖z − z′‖²) is a Gaussian position kernel with z denoting the 2D position of a pixel in an image patch (normalized to [0, 1]), and ko(θ̃(z), θ̃(z′)) = exp(−γo‖θ̃(z) − θ̃(z′)‖²) is a Gaussian kernel over orientations. To estimate the difference between orientations at pixels z and z′, we use the following normalized gradient vectors in the kernel function ko:

θ̃(z) = [sin(θ(z)) cos(θ(z))].   (6)
The L2 distance between such vectors measures the difference of gradient orientations very well (see Figure 1). Note that computing the L2 distance on the raw angle values θ instead of the normalized gradient vectors θ̃ would give wrong similarities in some cases. For example, consider the two angles 2π − 0.01 and 0.01, which have very similar orientation but very large L2 distance.
To summarize, our gradient match kernel Kgrad consists of three kernels: the normalized linear
kernel is the same as that in the orientation histograms, weighting the contribution of each pixel
using gradient magnitudes; the orientation kernel ko computes the similarity of gradient orientations;
and the position Gaussian kernel kp measures how close two pixels are spatially.
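As a reference point, the match kernel can be evaluated directly in quadratic time. A sketch follows; the patch encoding (columns for magnitude, angle, and normalized position) is chosen for illustration, and the default γ values are those selected later in Section 4.1:

```python
import numpy as np

def k_grad(P, Q, gamma_o=5.0, gamma_p=3.0, eps_g=0.8):
    """Gradient match kernel of Eq. 5. P, Q: (n_pixels, 4) arrays with columns
    [magnitude, angle, y, x], positions normalized to [0, 1]."""
    def unpack(patch):
        m = patch[:, 0] / np.sqrt((patch[:, 0] ** 2).sum() + eps_g)
        th = np.stack([np.sin(patch[:, 1]), np.cos(patch[:, 1])], axis=1)
        return m, th, patch[:, 2:4]
    m1, th1, z1 = unpack(P)
    m2, th2, z2 = unpack(Q)
    ko = np.exp(-gamma_o * ((th1[:, None] - th2[None, :]) ** 2).sum(-1))
    kp = np.exp(-gamma_p * ((z1[:, None] - z2[None, :]) ** 2).sum(-1))
    return float((m1[:, None] * m2[None, :] * ko * kp).sum())
```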
The kernel view of orientation histograms provides a simple, unified way to turn pixel attributes into
patch-level features. One immediate extension is to construct color match kernels over pixel values:
Kcol(P, Q) = Σ_{z∈P} Σ_{z′∈Q} kc(c(z), c(z′)) kp(z, z′)   (7)

where c(z) is the pixel color at position z (intensity for gray images and RGB values for color images). kc(c(z), c(z′)) = exp(−γc‖c(z) − c(z′)‖²) measures how similar two pixel values are.
While the gradient match kernel can capture image variations and the color match kernel can describe image appearance, we find that a match kernel over local binary patterns can capture local shape more effectively [19]:

Kshape(P, Q) = Σ_{z∈P} Σ_{z′∈Q} s̃(z) s̃(z′) kb(b(z), b(z′)) kp(z, z′)   (8)

where s̃(z) = s(z)/√(Σ_{z∈P} s(z)² + εs), s(z) is the standard deviation of pixel values in the 3×3 neighborhood around z, εs is a small constant, and b(z) is a binary column vector that binarizes the pixel value differences in a local window around z (see Fig. 1, right). The normalized linear kernel s̃(z) s̃(z′) weighs the contribution of each local binary pattern, and the Gaussian kernel kb(b(z), b(z′)) = exp(−γb‖b(z) − b(z′)‖²) measures shape similarity through local binary patterns.
Match kernels defined over various pixel attributes provide a unified way to generate a rich, diverse visual feature set, which has been shown to be very successful in boosting recognition accuracy [6]. As validated by our own experiments, gradient, color and shape match kernels are strong in their own right and complement one another. Their combination turns out to be always (much) better than the best individual feature.
3.2 Learning Compact Features
Match kernels provide a principled way to measure the similarity of image patches, but evaluating
kernels can be computationally expensive when image patches are large [1]. Both for computational
efficiency and for representational convenience, we present an approach to extract compact low-dimensional features from match kernels: (1) uniformly and densely sample sufficient basis vectors from the support region to guarantee accurate approximation to match kernels; (2) learn compact basis
vectors using kernel principal component analysis. An important advantage of our approach is that
no local minima are involved, unlike constrained kernel singular value decomposition [1].
We now describe how our compact low-dimensional features are extracted from the gradient kernel Kgrad; features for the other kernels can be generated the same way. Rewriting the kernels in Eq. 5 as inner products ko(θ̃(z), θ̃(z′)) = φo(θ̃(z))⊤φo(θ̃(z′)) and kp(z, z′) = φp(z)⊤φp(z′), we can derive the following feature over image patches:

Fgrad(P) = Σ_{z∈P} m̃(z) φo(θ̃(z)) ⊗ φp(z)   (9)

where ⊗ is the tensor product. For this feature, it follows that Fgrad(P)⊤Fgrad(Q) = Kgrad(P, Q). Because we use Gaussian kernels, Fgrad(P) is an infinite-dimensional vector.
A straightforward approach to dimensionality reduction is to sample sufficient image patches from training images and perform KPCA for match kernels. However, such an approach makes the learned features
depend on the task at hand. Moreover, KPCA can become computationally infeasible when the
number of patches is very large.
Sufficient Finite-dimensional Approximation. We present an approach to approximate match kernels directly without requiring any image. Following classic methods, we learn finite-dimensional
features by projecting Fgrad (P ) into a set of basis vectors. A key issue in this projection process
is how to choose a set of basis vectors which makes the finite-dimensional kernel approximate well
the original kernel. Since pixel attributes are low-dimensional vectors, we can achieve a very good
approximation by sampling sufficient basis vectors using a fine grid over the support region. For example, consider the Gaussian kernel ko(θ̃(z), θ̃(z′)) over gradient orientation. Given a set of basis vectors {φo(xi)}_{i=1}^{do}, where the xi are sampled normalized gradient vectors, we can approximate the infinite-dimensional vector φo(θ̃(z)) by its projection into the space spanned by these do basis vectors. Following the formulation in [1], such a procedure is equivalent to using a finite-dimensional kernel:

k̃o(θ̃(z), θ̃(z′)) = ko(θ̃(z), X)⊤ Ko⁻¹ ko(θ̃(z′), X) = [G ko(θ̃(z), X)]⊤ [G ko(θ̃(z′), X)]   (10)

where ko(θ̃(z), X) = [ko(θ̃(z), x1), · · · , ko(θ̃(z), xdo)]⊤ is a do × 1 vector, Ko is a do × do matrix with [Ko]ij = ko(xi, xj), and Ko⁻¹ = G⊤G.
[Figure 2 plots omitted. Left: kernel values for the ground-truth kernel and finite approximations with grids of size 10, 14 and 16. Right: RMSE (×10⁻⁴) vs. dimensionality (0-500) for Kgrad, Kcol and Kshape.]
Figure 2: Finite-dimensional approximation. Left: the orientation kernel ko(θ̃(z), θ̃(z′)) and its finite-dimensional approximation. γo is set to 5 (as used in the experiments) and θ̃(z′) is fixed to [1 0]. All curves show kernel values as functions of θ̃(z). The red line is the ground truth kernel function ko, and the black, green and blue lines are the finite approximation kernels with different grid
sizes. Right: root mean square error (RMSE) between KPCA approximation and the corresponding
match kernel as a function of dimensionality. We compute the RMSE on randomly sampled 10000
datapoints. The three lines show the RMSE between the kernels Kgrad (red) and Kcol (blue) and
Kshape (green), and their respective approximation kernels.
The resulting feature map φ̃o(θ̃(z)) = G ko(θ̃(z), X) is now only do-dimensional. In a similar manner, we can also approximate the kernels kp, kc and kb. The finite-dimensional feature for the gradient match kernel is F̃grad(P) = Σ_{z∈P} m̃(z) φ̃o(θ̃(z)) ⊗ φ̃p(z), and may be efficiently used as a feature over image patches. We validate our intuition in Fig.
?p (z), and may be efficiently used as features over image patches. We validate our intuition in Fig.
2. As we expect, the approximation error rapidly drops with increasing grid sizes. When the grid
size is larger than 16, the finite kernel and the original kernel become virtually indistinguishable.
For the shape kernel over local binary patterns, because the variables are binary, we simply choose
the set of all 28 = 256 basis vectors and thus no approximation error is introduced.
Compact Features. Although Fegrad (P ) is finite-dimensional, the dimensionality can be high due
to the tensor product. For example, consider the shape kernel descriptor: the size of basis vectors
on kernel kb is 256; if we choose the basis vectors of the position kernel kp on a 5 ? 5 regular grid,
the dimensionality of the resulting shape kernel descriptor Fshape would be 256 ? 25 = 6400, too
high for practical purposes. Dense uniform sampling leads to accurate approximation but does not
guarantee orthogonality of the basis vectors, thus introducing redundance. The size of basis vectors
can be further reduced by performing kernel principal component analysis over joint basis vectors:
{?o (x1 ) ? ?p (y1 ), ? ? ? , ?o (xdo ) ? ?p (ydp )}, where ?p (ys ) are basis vectors for the position kernel
and dp is the number of basis vectors. The t?th kernel principal component can be written as
PCt =
dp
do X
X
t
?ij
?o (xi ) ? ?p (yj )
(11)
i=1 j=1
where do and dp are the sizes of basis vectors for the orientation and position kernel, respectively,
t
t t
t
is learned through kernel principal component analysis: K
and ?ij
Pc ? = ? ? , where Kc is a
centered kernel matrix with [Kc ]ijst = ko (xi , xj )kp (ys , yt ) ? 2 i0 ,s0 ko (xi0 , xj )kp (ys0 , yt ) +
P
i0 ,j 0 ,s0 ,t0 ko (xi0 , xj 0 )kp (ys0 , yt0 ). As shown in fig. (2), match kernels can be approximated rather
accurately using the reduced basis vectors by KPCA. Under the framework of kernel principal component analysis, our gradient kernel descriptor (Eq. 5) has the form
(
)
dp
do X
X
X
t
t
e
F
(P ) =
?
m(z)k
e
(12)
o (?(z), xi )kp (z, yj )
grad
ij
i=1 j=1
z?P
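Putting Eqs. 11 and 12 together, a sketch of the pipeline: KPCA over the joint basis kernel matrix yields the coefficients α (eigenvector rescaling by 1/√λ is omitted in this sketch), and the descriptor is a projection of per-pixel kernel evaluations. Shapes and names are illustrative:

```python
import numpy as np

def kpca_alphas(Ko, Kp, r):
    """Top-r principal directions over joint basis vectors phi_o(x_i) (x) phi_p(y_j).

    Ko: (d_o, d_o), Kp: (d_p, d_p). Returns alpha with shape (r, d_o, d_p)."""
    K = np.kron(Ko, Kp)                          # joint kernel matrix of Eq. 11
    n = K.shape[0]
    H = np.eye(n) - np.full((n, n), 1.0 / n)     # centering
    w, V = np.linalg.eigh(H @ K @ H)
    top = np.argsort(w)[::-1][:r]
    return V[:, top].T.reshape(r, Ko.shape[0], Kp.shape[0])

def kdes_grad(ko_vals, kp_vals, m_tilde, alpha):
    """Eq. 12. ko_vals: (n_pix, d_o) values k_o(theta(z), x_i); kp_vals: (n_pix, d_p)
    values k_p(z, y_j); m_tilde: (n_pix,) normalized gradient magnitudes."""
    S = (m_tilde[:, None] * ko_vals).T @ kp_vals  # (d_o, d_p) inner sums over z
    return np.einsum('tij,ij->t', alpha, S)
```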
The computational bottleneck of extracting kernel descriptors is evaluating the kernel functions ko and kp between pixels. Fortunately, we can compute the two kernel values separately at cost do + dp, rather than do·dp. Our most expensive kernel descriptor, the shape kernel, takes about 4 seconds in MATLAB to compute on a typical image (300×300 resolution and 16×16 image patches over 8×8 grids). It is about 1.5 seconds for the gradient kernel descriptor, compared to about 0.4 seconds
for SIFT under the same setting. A more efficient GPU-based implementation will certainly reduce
the computation time for kernel descriptors such that real time applications become feasible.
4 Experiments
We compare gradient (KDES-G), color (KDES-C), and shape (KDES-S) kernel descriptors to SIFT
and several other state of the art object recognition algorithms on four publicly available datasets:
Scene-15, Caltech101, CIFAR10, and CIFAR10-ImageNet (a subset of ImageNet). For gradient and
shape kernel descriptors and SIFT, all images are transformed into grayscale ([0, 1]) and resized to be
no larger than 300 ? 300 pixels with preserved ratio. Image intensity or RGB values are normalized
to [0 1]. We extracted all low level features with 16?16 image patches over dense regular grids with
spacing of 8 pixels. We used publicly available dense SIFT code at http://www.cs.unc.edu/ lazebnik [13], which includes spatial binning, soft binning and truncation (nonlinear cutoff at 0.2), and
has been demonstrated to obtain high accuracy for object recognition. For our gradient kernel descriptors we use the same gradient computation as used for SIFT descriptors. We also evaluate the
performance of the combination of the three kernel descriptors (KDES-A) by simply concatenating
the image-level features vectors.
Instead of spatial pyramid kernels, we compute image-level features using efficient match kernels
(EMK), which have been shown to produce more accurate quantization. We consider 1×1, 2×2 and 4×4 pyramid sub-regions (see [1]), and perform constrained kernel singular value decomposition (CKSVD) to form image-level features, using 1,000 visual words (basis vectors in CKSVD) learned by K-means from about 100,000 image patch features. We evaluate classification performance with accuracy averaged over 10 random training/testing splits, with the exception of the CIFAR10 dataset, where we report the accuracy on the test set. We have experimented both with linear SVMs and Laplacian kernel SVMs and found that Laplacian kernel SVMs over efficient match kernel features are always better than linear SVMs (see Section 4.2). We use Laplacian kernel SVMs in our experiments
(except for the tiny image dataset CIFAR10).
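The EMK/CKSVD construction itself is detailed in [1]; the pyramid assembly step is simple, pooling patch-level descriptors per spatial cell and concatenating. A hedged sketch, assuming plain average pooling as a stand-in for the full CKSVD machinery:

    import numpy as np

    def pyramid_feature(descs, centers, levels=(1, 2, 4)):
        # descs: (n, d) patch-level descriptors; centers: (n, 2) patch centers in [0,1)^2.
        # Returns the concatenation of per-cell pooled features over 1x1, 2x2, 4x4 grids.
        parts = []
        for g in levels:
            cell = np.minimum((centers * g).astype(int), g - 1)  # cell index per patch
            flat = cell[:, 0] * g + cell[:, 1]
            for c in range(g * g):
                mask = flat == c
                parts.append(descs[mask].mean(axis=0) if mask.any()
                             else np.zeros(descs.shape[1]))
        return np.concatenate(parts)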
4.1 Hyperparameter Selection
We select kernel parameters using a subset of ImageNet. We retrieve 8 everyday categories from
the ImageNet collection: apple, banana, box, coffee mug, computer keyboard, laptop, soda can and
water bottle. We choose basis vectors for k_o, k_c, and k_p from 25, 5×5×5 and 5×5 uniform grids, respectively, which give sufficient approximations to the original kernels (see also Fig. 2). We optimize the dimensionality of KPCA and the match kernel parameters jointly using exhaustive grid search. Our experiments suggest that the optimal parameter settings are r = 200 (dimensionality of kernel descriptors), \gamma_o = 5, \gamma_c = 4, \gamma_b = 2, \gamma_p = 3, \epsilon_g = 0.8 and \epsilon_s = 0.2 (fig. 3). In the following experiments, we will keep these values fixed, even though the performance may improve with task-dependent hyperparameter selection.
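Exhaustive grid search over these few hyperparameters is straightforward; a minimal sketch, where the evaluate callback (returning held-out accuracy for one parameter setting) is hypothetical:

    import itertools

    def grid_search(evaluate, grids):
        # grids: dict mapping parameter name -> list of candidate values.
        # evaluate: callable(params_dict) -> validation accuracy (user-supplied).
        best, best_score = None, -float('inf')
        names = sorted(grids)
        for values in itertools.product(*(grids[n] for n in names)):
            params = dict(zip(names, values))
            score = evaluate(params)
            if score > best_score:
                best, best_score = params, score
        return best, best_score

    # e.g. grid_search(evaluate, {'r': [100, 200, 300, 400], 'gamma_o': [1, 3, 5, 7]})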
4.2 Benchmark Comparisons
Scene-15. Scene-15 is a popular scene recognition benchmark from [13] which contains 15 scene
categories with 200 to 400 images in each. SIFT features have been extensively used on Scene-15. Following the common experimental setting, we train our models on 1,500 randomly selected
images (100 images per category) and test on the rest. We report the averaged accuracy of SIFT,
KDES-S, KDES-C, KDES-G, and KDES-A over 10 random training/test splits in Table 1. As we
see, both gradient and shape kernel descriptors outperform SIFT with a margin. Gradient kernel
descriptors and shape kernel descriptors have similar performance. It is not surprising that the
intensity kernel descriptor has a lower accuracy, as all the images are grayscale. The combination of
the three kernel descriptors further boosts the performance by about 2 percent. Another interesting
finding is that Laplacian kernel SVMs are significantly better than linear SVMs (86.7% vs. 81.9% for KDES-A).
In our recognition system, the accuracy of SIFT is 82.2% compared to 81.4% in spatial pyramid
match (SPM). We also tried to replace SIFT features with our gradient and shape kernel descriptors in SPM, and both obtained 83.5% accuracy, 2 percent higher than SIFT features. To our best
knowledge, our gradient kernel descriptor alone outperforms the best published result 84.2% [27].
Figure 3: Hyperparameter selection. Left: accuracy as a function of feature dimensionality for the orientation kernel (KDES-G) and shape kernel (KDES-S). Center: accuracy as a function of \epsilon_g and \epsilon_s. Right: accuracy as a function of \gamma_o and \gamma_b.
Methods              | SIFT     | KDES-C   | KDES-G   | KDES-S   | KDES-A
Linear SVM           | 76.7±0.7 | 38.5±0.4 | 81.6±0.6 | 79.8±0.5 | 81.9±0.6
Laplacian kernel SVM | 82.2±0.9 | 47.9±0.8 | 85.0±0.6 | 84.9±0.7 | 86.7±0.4

Table 1: Comparisons of recognition accuracy on Scene-15: kernel descriptors and their combination vs SIFT.
Caltech-101. Caltech-101 [15] consists of 9,144 images in 101 object categories and one background category. The number of images per category varies from 31 to 800. Because many researchers have reported their results on Caltech-101, we can directly compare our algorithm to the
existing ones. Following the standard experimental setting, we train classifiers on 30 images and test
on no more than 50 images per category. We report our results in Table 2. We compare our kernel
descriptors with recently published results obtained both by low-level feature learning algorithms,
convolutional deep belief networks (CDBN), and sparse coding methods: invariant predictive sparse
decomposition (IPSD) and locality-constrained linear coding. We observe that SIFT features in conjunction with efficient match kernels work well on this dataset and obtain 70.8% accuracy using a
single patch size, which beat SPM with the same SIFT features by a large margin. Both our gradient
kernel descriptor and shape kernel descriptor are superior to CDBN by a large margin.
We have performed feature extraction with three different patch sizes: 16×16, 25×25 and 31×31
and reached the same conclusions with many other researchers: multiple patch sizes (scales) can
boost the performance by a few percent compared to the single patch size. Notice that both naive
Bayesian nearest neighbor (NBNN) and locality-constrained linear coding should be compared to
our kernel descriptors over multiple patch sizes because both of them used multiple scales to boost
the performance. Using only our gradient kernel descriptor obtains 75.2% accuracy, higher than the
results obtained by all other single feature based methods, to our best knowledge. Another finding
is that the combination of three kernel descriptors outperforms any single kernel descriptor. We
note that better performance has been reported with the use of more image features [6]. Our goal
in this paper is to evaluate the strengths of kernel descriptors. To improve accuracy further, kernel
descriptors can be combined with other types of image features.
CIFAR-10. CIFAR-10 is a labeled subset of the 80 million tiny images dataset [25, 12]. This dataset consists of 60,000 32×32 color images in 10 categories, with 5,000 images per category as training set and 1,000 images per category as test set. Deep belief networks have been extensively investigated on this dataset [21, 22]. We extract kernel descriptors over 8×8 image patches per pixel. Efficient match kernels over the three spatial grids 1×1, 2×2, and 3×3 are used to generate image-level features. The resulting feature vectors have a length of (1+4+9)×1,000 (visual words) = 14,000 per kernel descriptor. Linear SVMs are trained due to the large number of training images.
SPM [13]   64.4±0.5 | kCNN [28]  67.4     | KDES-C  40.8±0.9 | KDES-C(M)  42.4±0.5
NBNN [2]   73.0     | IPSD [10]  56.0     | KDES-G  73.3±0.6 | KDES-G(M)  75.2±0.4
CDBN [14]  65.5     | LLC [26]   73.4±0.5 | KDES-S  68.2±0.7 | KDES-S(M)  70.3±0.6
SIFT       70.8±0.8 | SIFT(M)    73.2±0.5 | KDES-A  74.5±0.8 | KDES-A(M)  76.4±0.7

Table 2: Comparisons on Caltech-101. Kernel descriptors are compared to recently published results. (M) indicates that features are extracted with multiple image patch sizes.
LR         36.0 | GRBM, ZCAd images  59.6 | mRBM       59.7 | KDES-C  53.9
SVM        39.5 | GRBM               63.8 | cRBM       64.7 | KDES-G  66.3
GIST [20]  54.7 | fine-tuning GRBM   64.8 | mcRBM      68.3 | KDES-S  68.2
SIFT       65.6 | GRBM two layers    56.6 | mcRBM-DBN  71.0 | KDES-A  76.0

Table 3: Comparisons on CIFAR-10. Both logistic regression and SVMs are trained over image pixels.
Methods               | SIFT     | KDES-C   | KDES-G   | KDES-S   | KDES-A
Laplacian kernel SVMs | 66.5±0.4 | 56.4±0.8 | 69.0±0.8 | 70.5±0.7 | 75.2±0.7

Table 4: Comparisons on CIFAR10-ImageNet, a subset of ImageNet using the 10 CIFAR categories.
We compare our kernel descriptors to deep networks [14, 9] and several baselines in Table 3. One
immediate observation is that sophisticated feature extractions are significantly better than raw pixel
features. Linear logistic regression and linear SVMs over raw pixels only have accuracies of 36%
and 39.5%, respectively, over 30 percent lower than deep belief networks and our kernel descriptors.
SIFT features still work well on tiny images and have an accuracy of 65.2%. Color kernel descriptor,
KDES-C, has 53.9% accuracy. This result is a bit surprising since each category has a large color
variation. A possible explanation is that spatial information can help a lot. To validate our intuitions, we also evaluated the color kernel descriptor without spatial information (kernel features are
extracted on a 1×1 spatial grid), and only obtained 38.5% accuracy, 18 percent lower than the color
kernel descriptor over pyramid spatial grids. KDES-G is slightly better than SIFT features. The
shape kernel feature, KDES-S, has accuracy of 68.2%, and is the best single feature on this dataset.
Combing the three kernel descriptors, we obtain the best performance of 76%, 5 percent higher than
the most sophisticated deep network mcRBM-DBN, which models pixel means and covariances jointly using factorized third-order Boltzmann machines.
CIFAR-10-ImageNet. Motivated by CIFAR-10, we collect a labeled subset of ImageNet [4] by
retrieving 10 categories used in ImageNet: Airplane, Automobile, Bird, Cat, Deer, Dog, Frog, Horse,
Ship and Truck. The total number of images is 15,561 with more than 1,200 images per category.
This dataset is very challenging due to the following facts: multiple objects can appear in one image,
only a small part of objects are visible, backgrounds are cluttered, and so on. We train models on
1,000 images per class and test on 200 images per category. We report the averaged results over 10
random training/test splits in Table 4. We could not finish running deep belief networks in a reasonable time since they are slow on images of this scale. Both gradient and shape kernel descriptors achieve higher accuracy than SIFT features, which again confirms that our gradient kernel descriptor and shape kernel descriptor outperform SIFT features on high-resolution images in the same categories as CIFAR-10. We also ran the experiments on downsized images, no larger than 50×50 with preserved aspect ratio. We observe that the accuracy drops 4-6 percent compared to the high-resolution images. This validates that high resolution is helpful for object recognition.
5 Conclusion
We have proposed a general framework, kernel descriptors, to extract low-level features from image
patches. Our approach is able to turn any pixel attribute into patch-level features in a unified and
principled way. Kernel descriptors are based on the insight that the inner product of orientation histograms is a particular match kernel over image patches. We have performed extensive comparisons
and confirmed that kernel descriptors outperform both SIFT features and hierarchical feature learning, where the former is the default choice for object recognition and the latter is the most popular
low-level feature learning technique. To our best knowledge, we are the first to show how kernel
methods can be applied for extracting low-level image features and show superior performance. This
opens up many possibilities for learning low-level features with other kernel methods. Considering
the huge success of kernel methods in the last twenty years, we believe that this direction is worth
being pursued. In the future, we plan to investigate alternative kernels for low-level feature learning
and learn pixel attributes from large image data collections such as ImageNet.
References

[1] L. Bo and C. Sminchisescu. Efficient match kernel between sets of features for visual recognition. In NIPS, 2009.
[2] O. Boiman, E. Shechtman, and M. Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[3] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: a large-scale hierarchical image database. In CVPR, 2009.
[5] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In CVPR, 2008.
[6] P. Gehler and S. Nowozin. On feature combination for multiclass object classification. In ICCV, 2009.
[7] K. Grauman and T. Darrell. The pyramid match kernel: discriminative classification with sets of image features. In ICCV, 2005.
[8] D. Haussler. Convolution kernels on discrete structures. Technical report, 1999.
[9] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
[10] K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic filter maps. In CVPR, 2009.
[11] R. Kondor and T. Jebara. A kernel between sets of vectors. In ICML, 2003.
[12] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
[13] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[14] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
[15] F. Li, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE PAMI, 2006.
[16] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60:91-110, 2004.
[17] S. Lyu. Mercer kernels for object recognition with local features. In CVPR, 2005.
[18] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE PAMI, 27(10):1615-1630, 2005.
[19] T. Ojala, M. Pietikäinen, and T. Mäenpää. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE PAMI, 24(7):971-987, 2002.
[20] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42(3):145-175, 2001.
[21] M. Ranzato, A. Krizhevsky, and G. Hinton. Factored 3-way restricted Boltzmann machines for modeling natural images. In AISTATS, 2010.
[22] M. Ranzato and G. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In CVPR, 2010.
[23] B. Schölkopf, A. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[24] S. Smale, L. Rosasco, J. Bouvrie, A. Caponnetto, and T. Poggio. Mathematics of the neural response. Foundations of Computational Mathematics, 10(1):67-91, 2010.
[25] A. Torralba, R. Fergus, and W. Freeman. 80 million tiny images: a large data set for nonparametric object and scene recognition. IEEE PAMI, 30(11):1958-1970, 2008.
[26] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Guo. Locality-constrained linear coding for image classification. In CVPR, 2010.
[27] J. Wu and J. Rehg. Beyond the Euclidean distance: creating effective visual codebooks using the histogram intersection kernel. In ICCV, 2009.
[28] K. Yu, W. Xu, and Y. Gong. Deep learning with kernel regularization for visual recognition. In NIPS, 2008.
3,476 | 4,148 | Joint Cascade Optimization Using a Product of
Boosted Classifiers
Franc?ois Fleuret
Idiap Research Institute
Martigny, Switzerland
[email protected]
Leonidas Lefakis
Idiap Research Institute
Martigny, Switzerland
[email protected]
Abstract
The standard strategy for efficient object detection consists of building a cascade
composed of several binary classifiers. The detection process takes the form of a
lazy evaluation of the conjunction of the responses of these classifiers, and concentrates the computation on difficult parts of the image which cannot be trivially
rejected.
We introduce a novel algorithm to construct jointly the classifiers of such a cascade, which interprets the response of a classifier as the probability of a positive
prediction, and the overall response of the cascade as the probability that all the
predictions are positive. From this noisy-AND model, we derive a consistent loss
and a Boosting procedure to optimize that global probability on the training set.
Such a joint learning allows the individual predictors to focus on a more restricted
modeling problem, and improves the performance compared to a standard cascade. We demonstrate the efficiency of this approach on face and pedestrian detection with standard data-sets and comparisons with reference baselines.
1 Introduction
Object detection remains one of the core objectives of computer vision, either as an objective per
se, for instance for automatic focusing on faces in digital cameras, or as means to get high-level
understanding of natural scenes for robotics and image retrieval.
The standard strategy which has emerged for detecting objects of reasonable complexity such as
faces is the so-called ?sliding-window? approach. It consists of visiting all locations and scales in
the scene to be parsed, and for any such pose, evaluating a two-class predictor which computes if
the object of interest is visible there.
The computational cost of such approaches is controlled traditionally with a cascade, that is a succession of classifiers, each one being evaluated only if the previous ones in the sequence have not
already rejected the candidate location. Such an architecture concentrates the computation on difficult parts of the global image to be processed, and reduces tremendously the overall computational
effort.
In its original form, this approach constructs classifiers one after another during training, each one
from examples which have not been rejected by the previous ones. While very successful, this
technique suffers from three main practical drawbacks. The first one is the need for a very large
number of negative samples, so that enough samples are available to train any one of the classifiers.
The second drawback is the necessity to define as many thresholds as there are levels in the cascade.
This second step may seem innocuous, but in practice is a serious difficulty, requiring additional
validation data. Finally the third drawback is the inability of a standard cascade to properly exploit
the trade-off between the different levels. A response marginally below threshold at a certain level
is enough to reject a sample, even if classifiers at other levels have strong responses.
At a more conceptual level, standard training for cascades does not allow the classifiers to exploit
their joint modeling: Each classifier is trained as if it has to do the job alone, without having the
opportunity to properly balance its own modeling effort and that of the other classifiers.
The novel approach we propose here is a joint learning of the classifiers constituting a cascade.
We interpret the individual responses of the classifiers as probabilities of responding positively,
and define the overall response of the cascade as the probability of all the classifiers responding
positively under an assumption of independence. Instead of training classifiers successively, we
directly minimize a loss taking into account this global response. This noisy-AND model leads to a
very simple criterion for a new Boosting procedure, which improves all the classifiers symmetrically
on the positive samples, and focuses on improving the classifier with the best response on every
negative sample.
We demonstrate the efficiency of this technique for face and pedestrian detection. Experiments
show that this joint cascade learning requires far less negative training examples, and achieves performance better than standard cascades without the need for intensive bootstrapping. At the computational level, we propose to optimally permute the order of the classifiers during the evaluation to
reduce the overall number of evaluated classifiers, and show that such optimization allows for better
error rates at similar computational costs.
2 Related works
A number of methods have been proposed over the years to control the computational cost of
machine-learning based object detection. The idea common to these approaches is to rely on a
form of adaptive testing : only candidates which cannot be trivially rejected as not being the object
of interest will require heavy computation. In practice the majority of the candidates will be rejected
with a very coarse criterion, hence requiring very low computation.
2.1 Reducing object detection computational cost
Heisele et al. [1] propose a hierarchy of linear Support Vector Machines, each trained on images of
increasing resolution, to weed out background patches, followed by a final computationally intensive
polynomial SVM. In [2] and [3], the authors use an hierarchy of respectively two and three Support
Vector Machines of increasing complexity. Graf et al. [4] introduced the parallel support vector
machine which creates a filtering process by combining layers of parallel SVMs, each trained using
the support vectors of classifiers in the previous layer.
Fleuret and Geman [5] introduce a hierarchy of classifiers dedicated to positive populations with geometrical poses of decreasing randomness. This approach generalizes the cascade to more complex
pose spaces, but as for cascades, trains the classifiers separately.
Recently, a number of scanning alternatives to sliding window have also been introduced. In [6] a
branch and bound approach is utilized during scanning, while in [7] a divide and conquer approach
is proposed, wherein regions in the image are either accepted or rejected as a whole or split and
further processed. Feature-centric approaches is proposed by the authors in [8] and [9].
The most popular approach however, for both its conceptual simplicity and practical efficiency, is
the attentional cascade proposed by Viola and Jones [10]. Following this seminal paper, cascades
have been used in a variety of problems [11, 12, 13].
2.2 Improving attentional cascades
In recent years approaches have been proposed that address some of the issues we list in the introduction. In [14] the authors train a cascade with a global performance criteria and a single set of
parameters common to all stages. In [15] the authors address the asymmetric nature of the stage
goals via a biased minimax probability machine, while in [16] the authors formulate the stage goals
as a constrained optimization problem. In [17] an alternate boosting method dubbed FloatBoost is
proposed. It allows for backtracking and removing weak classifiers which no longer contribute.
Table 1: Notation

(x_n, y_n), n = 1, . . . , N : the training examples.
K : the number of levels in the cascade.
f_k(x) : the non-thresholded response of classifier k. During training, f_k^t(x) stands for that response after t steps of Boosting.
p_k(x) = 1 / (1 + exp(-f_k(x))) : the probability of classifier k responding positively on x. During training, p_k^t(x) stands for the same value after t steps of Boosting, computed from f_k^t(x).
p(x) = \prod_k p_k(x) : the posterior probability of sample x being positive, as estimated jointly by all the classifiers of the cascade. During training, p^t(x) is that value after only t steps of Boosting, computed from the p_k^t(x).
Sochman and Matas [18] presented a Boosting algorithm based on sequential probability ratio tests,
minimizing the average evaluation time subject to upper bounds on the false negative and false positive rates. A general framework for probabilistic boosting trees (of which cascades are a degenerated
case) was proposed in [19]. In all these methods however, a set of free parameters concerning detection and false alarm performances must be set during training. As will be seen, our method is
capable of postponing any decisions concerning performance goals until after training.
The authors in [20] use the output of each stage as an initial weak classifier of the boosting classifier
in the next stage. This allows the cascade to retain information between stages. However this
approach only constitutes a backward view of the cascade. No information concerning the future
performance of the cascade is available to each stage. In [21] sample traces are utilized to keep track
of the performance of the cascade on the training data, and thresholds are picked after the cascade
training is finished. This allows for reordering of cascade stages. However besides a validation set,
a large number of negative examples must also be bootstrapped not only during the training phase,
but also during the post-processing step of threshold and order calibration. Furthermore, different
learning targets are used in the learning and calibration phases.
To our knowledge, very little work has been done on the joint optimization of the cascaded stages. In
[22] the authors attempt to jointly optimize a cascade of SVMs. As can be seen, a cascade effectively
performs an AND operation over the data, enforcing that a positive example passes all stages; and
that a negative example be rejected by at least one stage. In order to simulate this behavior, the
authors attempt to minimize the maximum hinge loss over the SVMs for the positive examples, and
to minimize the product of the hinge losses for the negative examples. An approximate solution to
this formulation is found via cyclic optimization. In [23] the authors present a method similar to
ours, jointly optimizing a cascade using the product of the output of individual logistic regression
base classifiers. Their method attempts to find the MAP-estimate of the optimal classifier weights
using cyclic coordinate descent. As is the case with the work in [22], the authors consider the
ordering of the stages a priori fixed.
3 Method
Our approach can be interpreted as a noisy-AND: The classifiers in the cascade produce stochastic
Boolean predictions, conditionally independent given the signal to classify. We define the global
response of the cascade as the probability that all these predictions are positive.
This can be interpreted as if we were first computing from the signal x, for each classifier in the
cascade, a probability pk (x), and defining the response of the cascade as the probability that K
independent Bernoulli variables of parameters p1 (x), . . . , pK (x) would all be equal to 1. Such a
criterion takes naturally into account the confidence of individual classifiers in the final response,
and introduces an additional non-linearity in the decision function.
This approach is related to the noisy-OR proposed in [24] for multi-view object detection. However, their approach aims at decomposing a complex population into a collection of homogeneous
populations, while our objective is to speed up the computation for the detection of a homogeneous
population. In some sense the noisy-OR they propose and the noisy-AND we use for training are
addressing dual objectives.
3.1 Formalization
Let f_k(x) stand for the non-thresholded response of the classifier at level k of the cascade. We define

    p_k(x) = 1 / (1 + \exp(-f_k(x)))    (1)
as the probabilistic interpretation of the deterministic output of classifier k.
From that, we define the final output of the cascade as the probability that all classifiers make positive
predictions, under the assumption that they are conditionally independent, given x
    p(x) = \prod_{k=1}^{K} p_k(x).    (2)
In the ideal Boolean case, an example x will be classified as positive if and only if all classifiers classify it as such. Conversely, the example will be classified as negative if p_k(x) = 0 for at least one k. This is consistent with the AND nature of the cascade. Of course, due to the product, the final classifier is able to make probabilistic predictions rather than solely hard ones as in [22].
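In code, the noisy-AND response of Eqs. (1)-(2) is a one-liner; a log-domain form is numerically safer. Helper names below are ours:

    import numpy as np

    def cascade_response(f):
        # p(x) = prod_k sigmoid(f_k(x)), Eqs. (1)-(2); f is the vector of raw outputs.
        f = np.asarray(f, dtype=float)
        return float(np.prod(1.0 / (1.0 + np.exp(-f))))

    def log_cascade_response(f):
        # log p(x) = -sum_k log(1 + exp(-f_k)); logaddexp avoids overflow for large |f_k|.
        f = np.asarray(f, dtype=float)
        return float(-np.logaddexp(0.0, -f).sum())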
3.2 Joint Boosting
Let

    (x_n, y_n) \in R^d \times \{0, 1\}, n = 1, . . . , N    (3)
denote a training set. In order to train our cascade we consider the maximization of the joint log-likelihood of the data:

    J = \log \prod_n p(x_n)^{y_n} (1 - p(x_n))^{1 - y_n}.    (4)
At each round t we sequentially visit each classifier and add a weak learner which locally increases J the most. If p^t(x) denotes the overall response of the cascade after having added t weak learners to each classifier, and p_k^t(x) denotes the response of classifier k at that point (hence a function of the response of classifier k at step t, f_k^t(x)), the score to maximize in order to select a weak learner h_t^k(x_n) is:

    \sum_n w_n^{k,t} h_t^k(x_n)    (5)
with

    w_n^{k,t} = \frac{\partial J}{\partial f_k(x_n)} = \frac{y_n - p^t(x_n)}{1 - p^t(x_n)} (1 - p_k^t(x_n)).    (6)
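For completeness, Eq. (6) is just the chain rule: since p_k = \sigma(f_k), we have \partial p_k / \partial f_k = p_k (1 - p_k), hence \partial p / \partial f_k = p (1 - p_k). A short derivation sketch, in our own notation:

    \frac{\partial J}{\partial f_k(x_n)}
      = \left[ \frac{y_n}{p(x_n)} - \frac{1 - y_n}{1 - p(x_n)} \right] p(x_n) \, (1 - p_k(x_n))
      = \frac{y_n - p(x_n)}{1 - p(x_n)} \, (1 - p_k(x_n)).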
It should be noted that in this formulation the weights w_n^{k,t} are signed, and those assigned to negative examples are negative.

In the case of a positive example x_n this simplifies to w_n^{k,t} = 1 - p_k^t(x_n), and thus this criterion pushes every classifier in the cascade to maximize the response on positive samples, irrespective of the performance of the overall cascade.
In the case of a negative example however, the weight update rule becomes w_n^{k,t} = \frac{-p^t(x_n)}{1 - p^t(x_n)} (1 - p_k^t(x_n)); each classifier in the cascade is then passed information regarding the overall performance via the term \frac{-p^t(x_n)}{1 - p^t(x_n)}. If the cascade is already rejecting the negative example, then this term becomes 0 and the classifier ignores its performance on the specific example. On the other hand, if the cascade is performing poorly, then the term becomes increasingly large and the classifiers put large weights on that example.

Furthermore, due to the term 1 - p_k^t(x_n), each classifier puts larger weight on negative examples that it is already performing well on, effectively partitioning the space of negative examples.
The weights of the weak learners cannot be computed in closed form as for AdaBoost, and are estimated through a numerical line-search.
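A hedged sketch of one Boosting step for classifier k, combining Eqs. (5)-(6) with a crude grid line-search on J; select_weak_learner is a hypothetical callback returning the responses h_t^k(x_n) in {-1, +1} of the weak learner maximizing Eq. (5):

    import numpy as np

    def boosting_weights(y, p, pk):
        # Eq. (6): y in {0,1}; p = p^t(x_n), pk = p_k^t(x_n). Negatives get negative weights.
        return (y - p) / (1.0 - p) * (1.0 - pk)

    def boost_step(y, F, k, select_weak_learner, X, etas=np.linspace(0.0, 2.0, 41)):
        # One round for classifier k. F: (N, K) raw responses f_k^t(x_n), updated in place.
        y = np.asarray(y, dtype=float)
        P = 1.0 / (1.0 + np.exp(-F))        # per-classifier probabilities p_k^t(x_n)
        p = P.prod(axis=1)                  # cascade response p^t(x_n)
        w = boosting_weights(y, p, P[:, k])
        hx = select_weak_learner(X, w)      # assumed: responses of the best weak learner

        def J(eta):                         # joint log-likelihood after adding eta * h to f_k
            pk_new = 1.0 / (1.0 + np.exp(-(F[:, k] + eta * hx)))
            p_new = p / P[:, k] * pk_new    # swap classifier k's factor in the product
            return np.sum(y * np.log(p_new + 1e-12) + (1 - y) * np.log(1 - p_new + 1e-12))

        eta = max(etas, key=J)              # simple numerical line-search on the coefficient
        F[:, k] += eta * hx
        return F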
3.3 Exponential variant
To assess whether the asymptotic behavior of the loss (which is similar in spirit to the logistic one) is critical to the performance, we also experimented with the minimization of the exponential error of the output.

This translates to the minimization of the cost function:

    J_{exp} = \sum_n \left( \frac{1 - p(x_n)}{p(x_n)} \right)^{2 y_n - 1}    (7)
and leads to the following expression for the sample weights during Boosting:

    w_n^{k,t} = \frac{p_k^t(x_n) - 1}{p^t(x_n)}    (8)

for the positive samples and

    w_n^{k,t} = \frac{(1 - p_k^t(x_n)) \, p^t(x_n)}{(1 - p^t(x_n))^2}    (9)

for the negative ones.
Such a weighting strongly penalizes outliers in the training set, in a manner similar to AdaBoost's exponential loss.
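Relative to the sketch above, only the weight computation changes; Eqs. (8)-(9) become (names ours):

    import numpy as np

    def exponential_weights(y, p, pk):
        # Gradient-based weights of the exponential cost, y in {0,1}.
        y = np.asarray(y, dtype=bool)
        w = np.empty(len(p))
        w[y]  = (pk[y] - 1.0) / p[y]                          # Eq. (8), positive samples
        w[~y] = (1.0 - pk[~y]) * p[~y] / (1.0 - p[~y]) ** 2   # Eq. (9), negative samples
        return w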
4 Experiments

4.1 Implementation Details
We comparatively evaluate the proposed cascade framework on two data-sets. In [10] the authors
present an initial comparison between their cascade framework and an AdaBoost classifier on the
CMU-MIT data-set. They train the monolithic classifier for 200 rounds and compare it against a
simple cascade containing ten stages, each with 20 weak learners. As cascade architecture plays
an important role in the final performance of the cascade, and in order to avoid any issues in the
comparison pertaining to architectural designs, we keep this structure and evaluate both the proposed cascade and the Viola and Jones cascade, using this architecture. The monolithic classifier is
similarly trained for 200 rounds. During the training, the thresholds for each stage in the Viola and
Jones cascade are set to achieve a 99.5% detection rate.
As pointed out, our approach does not make use of a validation set, nor does it use bootstrapping during training. We experimented with bootstrapping a fixed number M of negative examples at fixed intervals, similar to [21], and attained higher performance than the one presented here. However, it was found that training was highly sensitive to the choice of M, and furthermore that this choice of M was application specific.
We tested three versions of our JointCascade approach: JointCascade is the algorithm described
in ? 3.2, JointCascade Augmented is the same, but is trained with as many negative examples as
the total number used by the Viola and Jones cascade, and JointCascade Exponential uses the
same number of negative samples as the basic setting, but uses the exponential version of the loss
described in ? 3.3.
4.2 Data-Sets

4.2.1 Pedestrians
For pedestrian detection we use the INRIA pedestrian data-set [25], which contains pedestrian images of various poses with high variance concerning background and lighting. The training set
consists of 1239 images of pedestrians as positive examples, and 12180 negative examples, mined
from 1218 pedestrian-free images. Of these we keep 900 images for training (together with their
mirror images, for a total of 1800) and 9000 negative examples. The remaining images in the original training set are put aside to be used as a validation set by the Viola and Jones cascade.
As in [25] we utilize a histogram of oriented gradient to describe each image. The reader is referred
to this article for implementation details of the descriptor.
The trained classifiers are then tested on a test set composed of 1126 images of pedestrians and
18120 non-pedestrian images.
4.2.2 Faces
For faces, we evaluate against the CMU+MIT data-set of frontal faces. We utilize the Haar-like
wavelet features introduced in [10], however, for performance reasons, we sub-sample 2000 of these
features at each round to be used for training.
For training we use the same data-set as that used by Viola and Jones consisting of 4916 images of
faces. Of these we use 4000 (plus their mirror images) for training and set apart a further 916 (plus
mirror images) for use as the validation set needed by the classical cascade approach. The negative
portion of the training set is comprised of 10000 non-face images, mined randomly from non-face
containing images.
In order to test the trained classifiers, we extract the 507 faces in the data-set and scale-normalize
to 24×24 images; a further 12700 non-face image patches are extracted from the background of the
images in the data-set. We do not perform scale search, nor do we use any form of post-processing.
4.2.3 Bootstrap Images
As, during training, the Viola and Jones cascade needs to bootstrap false positive examples after each
stage, we randomly mine a data-set of approximately 7000 images from the web. These images have
been manually inspected to ensure that they do not contain either faces or pedestrians. These images
are used for bootstrapping in both sets of experiments.
4.3 Error rate
The evaluation on the face data-set can be seen in Figure 1. The plotted lines represent the ROC
curves for the evaluated methods. The proposed methods are able to reach a level of performance
on par with the Viola and Jones cascade, without the need for a validation set or bootstrapping. The
log-likelihood version of our method performs slightly better than the exponential error version.
The ROC curves for the pedestrian detection task can be seen in Figure 2. The log-likelihood version
of our method significantly outperforms the Viola and Jones Cascade. The exponential error version
is again slightly worse than the log-likelihood version, however this too outperforms the classical
approach. Finally, as can be seen, augmenting the training data for the proposed method, leads to
further improvement.
The results on the two data-sets show that the proposed methods are capable of performing on
par or better than the Viola and Jones cascade, while avoiding the need for a validation set or for
bootstrapping. This lack of a need for bootstrapping, further means that the training time needed is
considerably smaller than in the case of the classical cascade.
4.4 Optimization of the evaluation order
As stated, one of the main motivations for using cascades is speed. We compare the average number
of stages visited per negative example for the various methods presented.
Typically in cascade training, the thresholds and orders of the various stages must be determined during training, either by setting them in an ad hoc manner or by using one of the many optimization schemes proposed. In our case however, any decision concerning the thresholds as well as the ordering of the stages can be postponed until after training. It is easy to derive, for any given detection goal, a relevant threshold \theta on the overall cascade response. Thus we ask that p(x_n) > \theta for an image patch to be accepted as positive. Subsequently, the image patch will be rejected if the product of any subset of strong classifiers has a value smaller than \theta.
Based on this we use a greedy method to evaluate, using the original training set, the optimal order
of classifiers as follows: initially we choose as the first stage in our cascade the classifier whose
Figure 1: True-positive rate vs. false-positive rate on the face data-set for the methods proposed (non-cascade AdaBoost, VJ cascade, JointCascade, JointCascade Augmented, JointCascade Exponential). The JointCascade variants are described in Section 4.1. At any true-positive rate above 95%, all three methods perform better than the standard cascade. This is a particularly good result for the basic JointCascade, which does not use bootstrapping during training, which would seem to be critical for such conservative regimes.
Figure 2: True-positive rate vs. false-positive rate on the pedestrian data-set for the methods proposed (non-cascade AdaBoost, VJ cascade, JointCascade, JointCascade Augmented, JointCascade Exponential). All three JointCascade methods outperform the standard cascade for regions of the false-positive rate which are of practical use.
Table 2: Average number of classifiers evaluated on a sample, for each method and different true-positive rates, on the two data-sets. As expected, the computational load increases with the accuracy. The JointCascade variants require marginally more operations at a fixed rate on the pedestrian population, and marginally less on the faces except at very conservative rates. This is an especially good result, given their lower false-positive rates, which should induce more computation on average.

Computational cost (faces):
TP  | VJ   | JointCascade | JointCascade Augmented | JointCascade Exponential
95% | 1.35 | 1.49         | 1.62                   | 1.69
90% | 1.21 | 1.18         | 1.31                   | 1.25
86% | 1.13 | 1.09         | 1.18                   | 1.11
82% | 1.10 | 1.04         | 1.12                   | 1.07
78% | 1.07 | 1.03         | 1.09                   | 1.04

Computational cost (pedestrians):
TP  | VJ   | JointCascade | JointCascade Augmented | JointCascade Exponential
95% | 2.27 | 2.58         | 2.66                   | 2.93
90% | 1.93 | 2.04         | 1.94                   | 2.21
86% | 1.56 | 1.79         | 1.71                   | 1.81
82% | 1.38 | 1.49         | 1.59                   | 1.52
78% | 1.30 | 1.37         | 1.48                   | 1.39
response is smaller than \theta for the largest number of negative examples. We then iteratively add to the order of the cascade that classifier which leads to a response smaller than \theta for the most negative examples when multiplied with the aggregated response of the stages already ordered in the cascade.
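A sketch of this greedy ordering, in our own code: at each step we pick the stage whose factor, multiplied into the running product, pushes the most not-yet-rejected negative training samples below \theta:

    import numpy as np

    def greedy_order(P_neg, theta):
        # P_neg: (N, K) per-stage probabilities p_k(x_n) on the negative training samples.
        # Returns a permutation of the K stages maximizing early rejections under p(x) < theta.
        N, K = P_neg.shape
        remaining = set(range(K))
        running = np.ones(N)             # product of the stages ordered so far
        alive = np.ones(N, dtype=bool)   # samples not yet rejected
        order = []
        while remaining:
            best = max(remaining,
                       key=lambda k: np.sum(alive & (running * P_neg[:, k] < theta)))
            order.append(best)
            remaining.remove(best)
            running *= P_neg[:, best]
            alive &= running >= theta
        return order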
As stated, this ordering of the cascade stages is computed using the training set. We then measure the speed of our ordered cascade on the same test sets as above, as shown in Table 2. As can be seen, in the case of the face dataset, in almost all cases our approach is actually faster during scanning than the classical Viola and Jones approach. When the augmented dataset is used, however, this speed advantage is lost; there is thus a trade-off between performance and speed, as is to be expected. The speed of our JointCascade approach on the pedestrian data-set is marginally worse than that of Viola and Jones, which is due to the lower false-positive rates.
5 Conclusion
We have presented a new criterion to train a cascade of classifiers in a joint manner. This approach
has a clear probabilistic interpretation as a noisy-AND, and leads to a global decision criterion which
avoids thresholding classifiers individually, and can exploit independence in the classifier response
amplitudes.
This method avoids the need for picking multiple thresholds and the requirement for additional validation data. It allows the final performance to be fixed easily without re-training. Finally,
we have demonstrated that it reaches state-of-the-art performance on standard data sets, without the
need for bootstrapping.
This approach is very promising as a general framework to build adaptive detection techniques. It
could easily be extended to hierarchical approaches instead of simple cascade, hence could be used
for latent poses richer than location and scale.
Finally, the reduction of the computational cost itself could be addressed in a more explicit manner
than the optimization of the order presented in ? 4.4. We are investigating a dynamic approach where
the same criterion is used to allocate weak learners adaptively among the classifiers. This could be
combined with a loss function explicitly estimating the expected computation cost of detection,
hence providing an incentive for early rejection of more samples in the cascade.
Acknowledgments
We thank the anonymous reviewers for their helpful comments. This work was supported by the
European Community?s Seventh Framework Programme FP7 - Challenge 2 - Cognitive Systems,
Interaction, Robotics - under grant agreement No 247022 - MASH.
References

[1] B. Heisele, T. Serre, S. Prentice, and T. Poggio. Hierarchical classification and feature reduction for fast face detection with support vector machines. Pattern Recognition Letters, 36(9):2007-2017, 2003.
[2] Hedi Harzallah, Frédéric Jurie, and Cordelia Schmid. Combining efficient object localization and image classification. In International Conference on Computer Vision, pages 237-244, 2009.
[3] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In International Conference on Computer Vision, pages 606-613, 2009.
[4] Hans Peter Graf, Eric Cosatto, Léon Bottou, Igor Dourdanovic, and Vladimir Vapnik. Parallel support vector machines: the cascade SVM. In Neural Information Processing Systems, pages 521-528, 2005.
[5] F. Fleuret and D. Geman. Coarse-to-fine face detection. International Journal of Computer Vision, 41(1/2):85-107, 2001.
[6] Christopher H. Lampert, M. B. Blaschko, and Thomas Hofmann. Beyond sliding windows: object localization by efficient subwindow search. In Conference on Computer Vision and Pattern Recognition, pages 1-8, 2008.
[7] Christoph H. Lampert. An efficient divide-and-conquer cascade for nonlinear object detection. In Conference on Computer Vision and Pattern Recognition, pages 1022-1029, 2010.
[8] Henry Schneiderman. Feature-centric evaluation for efficient cascaded object detection. In Conference on Computer Vision and Pattern Recognition, pages 29-36, 2004.
[9] A. Lehmann, B. Leibe, and L. Van Gool. Feature-centric efficient subwindow search. In International Conference on Computer Vision, pages 940-947, 2009.
[10] Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. In Conference on Computer Vision and Pattern Recognition, pages 511-518, 2001.
[11] Owen T. Carmichael and Martial Hebert. Shape-based recognition of wiry objects. In Conference on Computer Vision and Pattern Recognition, pages 401-408, 2003.
[12] Qiang Zhu, Shai Avidan, Mei-Chen Yeh, and Kwang-Ting Cheng. Fast human detection using a cascade of histograms of oriented gradients. In Conference on Computer Vision and Pattern Recognition, pages 1491-1498, 2006.
[13] Geremy Heitz, Stephen Gould, Ashutosh Saxena, and Daphne Koller. Cascaded classification models: combining models for holistic scene understanding. In Neural Information Processing Systems, pages 641-648, 2009.
[14] S. Charles Brubaker, Jianxin Wu, Jie Sun, Matthew D. Mullin, and James M. Rehg. On the design of cascades of boosted ensembles for face detection. International Journal of Computer Vision, 77(1-3):65-86, 2008.
[15] Kaizhu Huang, Haiqin Yang, Irwin King, and Michael R. Lyu. Learning classifiers from imbalanced data based on biased minimax probability machine. In Conference on Computer Vision and Pattern Recognition, pages 558-563, 2004.
[16] J. Wu, S. C. Brubaker, M. D. Mullin, and J. M. Rehg. Fast asymmetric learning for cascade face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30:369-382, 2008.
[17] Stan Z. Li and ZhenQiu Zhang. FloatBoost learning and statistical face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(9), 2004.
[18] Jan Sochman and Jiri Matas. WaldBoost: learning for time constrained sequential detection. In Conference on Computer Vision and Pattern Recognition, pages 150-156, 2005.
[19] Zhuowen Tu. Probabilistic boosting-tree: learning discriminative models for classification, recognition, and clustering. In International Conference on Computer Vision, pages 1589-1596, 2005.
[20] Rong Xiao, Long Zhu, and HongJiang Zhang. Boosting chain learning for object detection. In International Conference on Computer Vision, pages 709-715, 2003.
[21] Lubomir Bourdev and Jonathan Brandt. Robust object detection via soft cascade. In Conference on Computer Vision and Pattern Recognition, pages 236-243, 2005.
[22] M. Murat Dundar and Jinbo Bi. Joint optimization of cascaded classifiers for computer aided detection. In Conference on Computer Vision and Pattern Recognition, pages 1-8, 2007.
[23] V. C. Raykar, B. Krishnapuram, and S. Yu. Designing efficient cascaded classifiers: tradeoff between accuracy and cost. In Conference on Knowledge Discovery and Data Mining, 2010.
[24] Tae-Kyun Kim and Roberto Cipolla. MCBoost: multiple classifier boosting for perceptual co-clustering of images and visual features. In Neural Information Processing Systems, pages 841-856, 2008.
[25] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Conference on Computer Vision and Pattern Recognition, pages 886-893, 2005.
3,477 | 4,149 | Large Margin Learning of Upstream
Scene Understanding Models
?
Jun Zhu?
Li-Jia Li?
Fei-Fei Li?
Eric P. Xing?
?
{junzhu,epxing}@cs.cmu.edu
{lijiali,feifeili}@cs.stanford.edu
?
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
?
Department of Computer Science, Stanford University, Stanford, CA 94305
Abstract
Upstream supervised topic models have been widely used for complicated
scene understanding. However, existing maximum likelihood estimation (MLE)
schemes can make the prediction model learning independent of latent topic discovery and result in an imbalanced prediction rule for scene classification. This
paper presents a joint max-margin and max-likelihood learning method for upstream scene understanding models, in which latent topic discovery and prediction model estimation are closely coupled and well-balanced. The optimization
problem is efficiently solved with a variational EM procedure, which iteratively
solves an online loss-augmented SVM. We demonstrate the advantages of the
large-margin approach on both an 8-category sports dataset and the 67-class MIT
indoor scene dataset for scene categorization.
1 Introduction
Probabilistic topic models like the latent Dirichlet allocation (LDA) [5] have recently been applied
to a number of computer vision tasks such as object annotation and scene classification due
to their ability to capture latent semantic compositions of natural images [22, 23, 9, 13]. One of
the advocated advantages of such models is that they do not require "supervision" during training,
which is arguably preferred over supervised learning that would necessitate extra cost. But with the
increasing availability of free on-line information such as image tags, user ratings, etc., various forms
of "side-information" that can potentially offer "free" supervision have led to a need for new models
and training schemes that can make effective use of such information to achieve better results, such
as more discriminative topic representations of image contents, and more accurate image classifiers.
The standard unsupervised LDA ignores the commonly available supervision information, and thus
can discover a sub-optimal topic representation for prediction tasks. Extensions to supervised topic
models which can explore side information for discovering predictive topic representations have
been proposed, such as the sLDA [4, 25] and MedLDA [27]. A common characteristic of these
models is that they are downstream, that is, the supervised response variables are generated from
topic assignment variables. Another type of supervised topic models are the so-called upstream
models, of which the response variables directly or indirectly generate latent topic variables. In
contrast to downstream supervised topic models (dSTM), which are mainly designed by machine
learning researchers, upstream supervised topic models (uSTM) are well-motivated from human
vision and psychology research [18, 10] and have been widely used for scene understanding tasks.
For example, in the recently developed scene understanding models [23, 13, 14, 8], complex scene
images are modeled as a hierarchy of semantic concepts where the most top level corresponds to a
scene, which can be represented as a set of latent objects likely to be found in a given scene. To
learn an upstream scene model, maximum likelihood estimation (MLE) is the most common choice.
However, MLE can make the prediction model estimation independent of latent topic discovery and
result in an imbalanced prediction rule for scene classification, as we explain in Section 3.
In this paper, our goal is to address the weakness of MLE for learning upstream supervised topic
models. Our approach is based on the max-margin principle for supervised learning which has
shown great promise in many machine learning tasks, such as classification [21] and structured output prediction [24]. For the dSTM, max-margin training has been developed in MedLDA [27], which
has achieved better prediction performance than MLE. In such downstream models, latent topic assignments are sufficient statistics for the prediction model and it is easy to define the max-margin
constraints based on existing max-margin methods (e.g., SVM). However, for upstream supervised
topic models, the discriminant function for prediction involves an intractable computation of posterior distributions, which makes the max-margin training more delicate.
Specifically, we present a joint max-margin and max-likelihood estimation method for learning upstream scene understanding models. By using a variational approximation to the posterior distribution of supervised variables (e.g., scene categories), our max-margin learning approach iterates
between posterior probabilistic inference and max-margin parameter learning. The parameter learning solves an online loss-augmented SVM, which closely couples the prediction model estimation
and latent topic discovery, and this close interplay results in a well-balanced prediction rule for scene
categorization. Finally, we demonstrate the advantages of our max-margin approach on both the 8-category sports [13] and the 67-class MIT indoor scene [20] datasets. Empirical results show that
max-margin learning can significantly improve the scene classification accuracy.
The paper is structured as follows. Sec. 2 presents a generic scene understanding model we will
work on. Sec. 3 discusses the weakness of MLE in learning upstream models. Sec. 4 presents the
max-margin learning approach. Sec. 5 presents empirical results and Sec. 6 concludes.
2 Joint Scene and Object Model: a Generic Running Example
In this section, we present a generic joint scene categorization and object annotation model, which
will be used to demonstrate the large margin learning of upstream scene understanding models.
2.1 Image Representation
How should we represent a scene image? Friedman [10] pointed out that object recognition is
critical in the recognition of a scene. While individual objects contribute to the recognition of visual
scenes, human vision researchers Navon [18] and Biederman [2] also showed that people perform
rapid global scene analysis before conducting more detailed local object analysis when recognizing
scene images. To obtain a generic model, we represent a scene by using its global scene features
and objects within it. We first segment an image I into a set of local regions $\{r_1, \ldots, r_N\}$. Each
region is represented by three region features R (i.e., color, location and texture) and a set of image
patches X. These region features are represented as visual codewords. To describe detailed local
information of objects, we partition each region into patches. For each patch, we extract the SIFT
[16] features, which are insensitive to view-point and illumination changes. To model the global
scene representation, we extract a set of global features G [19]. In our dataset, we represent an
image as a tuple (r, x, g), where r denotes an instance of R, and likewise for x and g.
2.2 The Joint Scene and Object Model
The model is shown in Fig. 1 (a). S is the scene random variable, taking values from a finite set
$\mathcal{S} = \{s_1, \ldots, s_{M_s}\}$. For an image, the distribution over scene categories depends on its global representation features G. Each scene is represented as a mixture over latent objects O, and the mixing weights are defined with a generalized linear model (GLM) parameterized by $\eta$. By using a normal prior on $\theta$, the scene model can capture the mutual correlations between different objects, similar to the correlated topic models (CTMs) [3]. Here, we assume that for different scenes, the objects have different distributions and correlations. Let f denote the vector of real-valued feature functions of S and G; the generating procedure of an image is as follows:
1. Sample a scene category from the conditional scene model: $p(s|g, \eta) = \frac{\exp(\eta^\top f(g, s))}{\sum_{s'} \exp(\eta^\top f(g, s'))}$.
2. Sample the parameters $\theta\,|\,s, \mu, \Sigma \sim \mathcal{N}(\mu_s, \Sigma_s)$.
3. For each region $n$:
   (a) sample an object from $p(o_n = k|\theta) = \frac{\exp(\theta_k)}{\sum_j \exp(\theta_j)}$;
   (b) sample $M_r$ (i.e., 3: color, location and texture) region features $r_{nm}\,|\,o_n, \beta \sim \mathrm{Multi}(\beta_{m o_n})$;
   (c) sample $M_x$ image patches $x_{nm}\,|\,o_n, \pi \sim \mathrm{Multi}(\pi_{o_n})$.
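To make the procedure concrete, the sketch below samples one image from the model. It is a minimal illustration: the feature map f(g, s), all parameter shapes, and the counts N, M_r, M_x are hypothetical placeholders rather than the paper's settings.

```python
import numpy as np

def sample_image(g, eta, f, mu, Sigma, beta, pi, N, M_r, M_x, rng):
    """Draw (s, theta, o, r, x) from the joint scene/object model.

    f(g, s) returns the feature vector for the scene GLM; mu[s], Sigma[s]
    parameterize the per-scene normal over theta; beta has shape
    (M_r, K, V_r) and pi has shape (K, V_x). All illustrative assumptions.
    """
    S = len(mu)
    # 1. scene category from the conditional GLM p(s | g, eta)
    logits = np.array([eta @ f(g, s) for s in range(S)])
    p_s = np.exp(logits - logits.max()); p_s /= p_s.sum()
    s = int(rng.choice(S, p=p_s))
    # 2. topic-mixing parameters theta | s ~ N(mu_s, Sigma_s)
    theta = rng.multivariate_normal(mu[s], Sigma[s])
    p_o = np.exp(theta - theta.max()); p_o /= p_o.sum()
    o, r, x = [], [], []
    for _ in range(N):                       # 3. per-region sampling
        o_n = int(rng.choice(len(theta), p=p_o))           # (a)
        r.append([int(rng.choice(beta.shape[2], p=beta[m, o_n]))
                  for m in range(M_r)])                    # (b)
        x.append([int(rng.choice(pi.shape[1], p=pi[o_n]))
                  for _ in range(M_x)])                    # (c)
        o.append(o_n)
    return s, theta, o, r, x
```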
Figure 1: (a) the joint scene categorization and object annotation model with global features G; (b) average log-likelihood ratio $\log p(s|g,\eta)/\mathcal{L}_{\bar{\Theta}}$ under MLE and max-margin estimation, where the first bar is for the true categories and the rest are for categories sorted by their difference from the first one; (c) scene classification accuracy using (Blue) $\mathcal{L}_{\bar{\Theta}}$, (Green) $\log p(s|g,\eta)$, and (Red) $\mathcal{L}_{\bar{\Theta}} + \log p(s|g,\eta)$ for prediction. Group 1 is MLE and group 2 is max-margin training.
The generative model defines a joint distribution

$$p(s, \theta, o, r, x|g, \Theta) = p(s|\eta, g)\, p(\theta|\mu_s, \Sigma_s) \prod_{n=1}^{N}\Big( p(o_n|\theta) \prod_{m=1}^{M_r} p(r_{nm}|o_n, \beta) \prod_{m=1}^{M_x} p(x_{nm}|o_n, \pi) \Big),$$

where we have used $\Theta$ to denote all the unknown parameters $(\eta, \mu, \Sigma, \beta, \pi)$. From the joint distribution, we can make two types of predictions, namely scene classification and object annotation. For scene classification, we infer the maximum a posteriori prediction

$$s^* \triangleq \arg\max_s\, p(s|g, r, x) = \arg\max_s\, \log p(s, r, x|g). \qquad (1)$$
For object annotation, we can use the inferred latent representation of regions based on p(o|g, r, x)
and build a classifier to categorize regions into object classes, when some training examples with
manually annotated objects are provided. Since collecting fully labeled images with annotated objects is difficult, upstream scene models are usually learned with partially labeled images for scene
categorization, where only scene categories are provided and objects are treated as latent topics
or themes [9]. In this paper, we focus on scene classification. Some empirical results on object
annotation will be reported when labeled objects are available.
We use this joint model as a running example to demonstrate the basic principle of performing max-margin learning for the widely applied upstream scene understanding models because it is well-motivated, very generic, and covers many other existing scene understanding models. For example,
if we do not incorporate the global scene representation G, the joint model will be reduced to a
model similar as [14, 6, 23]. Moreover, the generic joint model provides a good framework for
studying the relative contributions of local object modeling and global scene representation, which
has been shown to be useful for scene classification [20] and object detection [17] tasks.
3 Weak Coupling of MLE in Learning Upstream Scene Models
To learn an upstream scene model, the most commonly used method is the maximum likelihood
estimation (MLE), such as in [23, 6, 14]. In this section, we discuss the weakness of MLE for
learning upstream scene models and motivate the max-margin approach.
Let $\mathcal{D} = \{(I_d, s_d)\}_{d=1}^{D}$ denote a set of partially labeled training images. The standard MLE obtains the optimum model parameters by maximizing the log-likelihood¹ $\sum_{d=1}^{D} \log p(s_d, r_d, x_d|g_d, \Theta)$. By using the factorization of $p(s, \theta, o, r, x|g, \Theta)$, MLE solves the following equivalent problem

$$\max_{\eta, \bar{\Theta}} \sum_d \big( \log p(s_d|g_d, \eta) + \mathcal{L}_{s_d, \bar{\Theta}} \big), \qquad (2)$$
where $\mathcal{L}_{s_d, \bar{\Theta}} \triangleq \log \int \sum_o p(\theta, o, r_d, x_d|s_d, \bar{\Theta})\,d\theta = \log p(r_d, x_d|s_d, \bar{\Theta})$ is the log-likelihood of the image features given the scene class, and $\bar{\Theta}$ denotes all the parameters except $\eta$.
Since $\mathcal{L}_{s,\bar{\Theta}}$ does not depend on $\eta$, the MLE estimation of the conditional scene model is to solve
$$\max_{\eta} \sum_d \log p(s_d|g_d, \eta), \qquad (3)$$

which does not depend on the latent object model. This is inconsistent with the prediction rule (1), which does depend on both the conditional scene model (i.e., $p(s|g, \eta)$) and the local object model.
¹The conditional likelihood estimation can avoid this problem to some extent, but it has not been studied, to the best of our knowledge.
This decoupling will result in an imbalanced combination between the conditional scene and object
models for prediction, as we explain below.
We first present some details of the MLE method. For $\eta$, the problem (3) is an MLE estimation of a GLM, and it can be efficiently solved with gradient descent methods, such as quasi-Newton methods [15]. For $\bar{\Theta}$, since the likelihood $\mathcal{L}_{s,\bar{\Theta}}$ is intractable to compute, we apply variational methods to obtain an approximation. By introducing a variational distribution $q_s(\theta, o)$ to approximate the posterior $p(\theta, o|s, r, x, \Theta)$ and using Jensen's inequality, we can derive a lower bound

$$\mathcal{L}_{s,\bar{\Theta}} \ge \mathbb{E}_{q_s}\big[\log p(\theta, o, r, x|s, \bar{\Theta})\big] + \mathcal{H}(q_s) \triangleq \mathcal{L}_{\bar{\Theta}}(q_s, \bar{\Theta}), \qquad (4)$$

where $\mathcal{H}(q) = -\mathbb{E}_q[\log q]$ is the entropy. Then, the intractable prediction rule (1) can be approximated with the variational prediction rule

$$s^* \triangleq \arg\max_{s, q_s} \big( \log p(s|g, \eta) + \mathcal{L}_{\bar{\Theta}}(q_s, \bar{\Theta}) \big). \qquad (5)$$

Maximizing $\sum_d \mathcal{L}_{\bar{\Theta}}(q_{s_d}, \bar{\Theta})$ leads to a closed-form solution of $\bar{\Theta}$. See the Appendix for the inference of $q_s$ as involved in the prediction rule (5) and the estimation of $\bar{\Theta}$.
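In code, the variational prediction rule (5) is an argmax over per-scene scores. The sketch below assumes an `elbo(s, r, x)` routine that runs the fixed-point inference of the Appendix and returns the optimized bound $\mathcal{L}_{\bar{\Theta}}(q_s^*)$; that routine is not shown, so treat this as a schematic of the decision rule only.

```python
import numpy as np

def predict_scene(g, r, x, eta, f, elbo, num_scenes):
    """Rule (5): s* = argmax_s  log p(s|g, eta) + L(q_s*)."""
    logits = np.array([eta @ f(g, s) for s in range(num_scenes)])
    log_p_s = logits - np.logaddexp.reduce(logits)  # log p(s | g, eta)
    scores = [log_p_s[s] + elbo(s, r, x) for s in range(num_scenes)]
    return int(np.argmax(scores))
```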
Now, we examine the effect of the conditional scene model $p(s|g,\eta)$ in making a prediction via the prediction rule (5). Fig. 1 (b-left) shows the relative importance of $\log p(s|g,\eta)$ in the joint decision rule (5) on the sports dataset [13]. We can see that in MLE the conditional scene model plays a very weak role in making a prediction when it is combined with the object model, i.e., $\mathcal{L}_{\bar{\Theta}}$. Therefore, as shown in Fig. 1 (c), although a simple logistic regression with global features (i.e., the green bar) can achieve a good accuracy, the accuracy of the prediction rule (5) that uses the joint likelihood bound (i.e., the red bar) is decreased due to the strong effect of the potentially bad prediction rule based on $\mathcal{L}_{\bar{\Theta}}$ (i.e., the blue bar), which only considers local image features.
In contrast, as shown in Fig. 1 (b-right), in the max-margin approach to be presented, the conditional
scene model plays a much more influential role in making a prediction via the rule (5). This results in
a better balanced combination between the scene and the object models. The strong coupling is due
to solving an online loss-augmented SVM, as we explain below. Note that we are not claiming any
weakness of MLE in general. All our discussions are concentrated on learning upstream supervised
topic models, as generically represented by the model in Fig. 1.
4 Max-Margin Training
Now, we present the max-margin method for learning upstream scene understanding models.
4.1 Problem Definition
For the prediction rule (1), we use $F(s, g, r, x; \Theta) \triangleq \log p(s|g, r, x, \Theta)$ to denote the discriminant function, which is more complicated than the commonly chosen linear form, in the sense we will explain shortly. In the same spirit of max-margin classifiers (e.g., SVMs), we define the hinge loss of the prediction rule (1) on $\mathcal{D}$ as

$$\mathcal{R}_{\mathrm{hinge}}(\Theta) = \frac{1}{D} \sum_d \max_s \big[\Delta\ell_d(s) - \Delta F_d(s; \Theta)\big],$$

where $\Delta\ell_d(s)$ is a loss function (e.g., 0/1 loss), and $\Delta F_d(s;\Theta) = F(s_d, g_d, r_d, x_d; \Theta) - F(s, g_d, r_d, x_d; \Theta)$ is the margin favored by the true category $s_d$ over any other category $s$.

The problem with the above definition is that exactly computing the posterior distribution $p(s|g, r, x, \Theta)$ is intractable. As in MLE, we use a variational distribution $q_s$ to approximate it. By using Bayes' rule and the variational bound in Eq. (4), we can lower-bound the log-likelihood

$$\log p(s|g, r, x, \Theta) = \log p(s, r, x|g, \Theta) - \log p(r, x|g, \Theta) \ge \log p(s|g, \eta) + \mathcal{L}_{\bar{\Theta}}(q_s, \bar{\Theta}) - c, \qquad (6)$$

where $c = \log p(r, x|g, \Theta)$. Without causing ambiguity, we will write $\mathcal{L}_{\bar{\Theta}}(q_s)$ without $\bar{\Theta}$. Since we need to make some assumptions about $q_s$, the equality in (6) usually does not hold. Therefore, the tightest lower bound is an approximation of the intractable discriminant function

$$F(s, g, r, x; \Theta) \approx \log p(s|g, \eta) + \max_{q_s} \mathcal{L}_{\bar{\Theta}}(q_s) - c. \qquad (7)$$

Then, the margin is $\Delta F_d(s; \Theta) = \eta^\top \Delta f_d(s) + \max_{q_{s_d}} \mathcal{L}_{\bar{\Theta}}(q_{s_d}) - \max_{q_s} \mathcal{L}_{\bar{\Theta}}(q_s)$, of which the linear term is the same as that in a linear SVM [7], and the difference between the two variational bounds causes the topic discovery to bias the learning of the scene classification model, as we shall see.
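Given the GLM features and the optimized per-scene bounds, the hinge loss is a few array operations; the constant c cancels inside the max. A minimal sketch, assuming `elbo[d, s]` caches $\max_{q_s} \mathcal{L}_{\bar{\Theta}}(q_s)$ for image d and a 0/1 loss (the loss value is our assumption):

```python
import numpy as np

def hinge_loss(eta, F_feats, elbo, y):
    """R_hinge(Theta) = (1/D) sum_d max_s [dl_d(s) - dF_d(s)].

    F_feats[d, s]: feature vector f(g_d, s); elbo[d, s]: optimized bound
    for scene s on image d; y[d]: true scene label.
    """
    D, S = elbo.shape
    total = 0.0
    for d in range(D):
        F = F_feats[d] @ eta + elbo[d]        # discriminant up to const c
        loss = np.ones(S); loss[y[d]] = 0.0   # 0/1 loss
        total += np.max(loss - (F[y[d]] - F))
    return total / D
```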
Using the variational discriminant function in Eq. (7) and applying the principle of regularized empirical risk minimization, we define the max-margin learning of the joint scene and object model as solving

$$\min_{\Theta}\; \Omega(\Theta) + \lambda \sum_d \Big( -\max_{q_{s_d}} \mathcal{L}_{\bar{\Theta}}(q_{s_d}) \Big) + C\,\mathcal{R}_{\mathrm{hinge}}(\Theta), \qquad (8)$$

where $\Omega(\Theta)$ is a regularizer of the parameters. Here, we define $\Omega(\Theta) \triangleq \frac{1}{2}\|\eta\|_2^2$. For the normal mean $\mu_s$ or covariance matrix $\Sigma_s$, a similar $\ell_2$-norm or Frobenius norm can be used without changing our algorithm. The free parameters $\lambda$ and $C$ are positive and trade off the classification loss and the data likelihood. When $\lambda \to \infty$, the problem (8) reduces to the standard MLE of the joint scene model with a fixed uniform prior on scene classes. Moreover, we can see the difference from the standard MLE (2): here, we minimize a hinge loss, which is defined on the joint prediction rule, while MLE minimizes the log-likelihood loss $\log p(s_d|g_d, \eta)$, which does not depend on the latent object model. Therefore, our approach can be expected to achieve a closer dependence between the conditional scene model and the latent object model. More insights are provided in the next section.
4.2 Solving the Optimization Problem
The problem (8) is generally hard to solve because the model parameters and variational distributions are strongly coupled. Therefore, we develop a natural iterative procedure that estimates the parameters $\Theta$ and performs posterior inference alternately. The intuition is that by fixing one part (e.g., $q_s$) the other part (e.g., $\Theta$) can be handled efficiently. Specifically, using the definitions, we rewrite the problem (8) as a min-max optimization problem
$$\min_{\Theta, \{q_{s_d}\}}\; \max_{\{s, q_s\}}\; \Big( \frac{1}{2}\|\eta\|_2^2 - (\lambda + C)\sum_d \mathcal{L}_{\bar{\Theta}}(q_{s_d}) + C \sum_d \big[-\eta^\top \Delta f_d(s) + \Delta\ell_d(s) + \mathcal{L}_{\bar{\Theta}}(q_s)\big] \Big), \qquad (9)$$
where the factor $1/D$ in $\mathcal{R}_{\mathrm{hinge}}$ is absorbed into the constant $C$. This min-max problem can be approximately solved with an iterative procedure. First, we infer the optimal variational posterior² $q_s^* = \arg\max_{q_s} \mathcal{L}_{\bar{\Theta}}(q_s)$ for each $s$ and each training image. Then, we solve
$$\min_{\Theta, \{q_{s_d}\}}\; \Big( \frac{1}{2}\|\eta\|_2^2 - (\lambda + C)\sum_d \mathcal{L}_{\bar{\Theta}}(q_{s_d}) + C \sum_d \max_s \big[-\eta^\top \Delta f_d(s) + \Delta\ell_d(s) + \mathcal{L}_{\bar{\Theta}}(q_s^*)\big] \Big).$$
For this sub-step, again, we apply an alternating procedure to solve the minimization problem over $\Theta$ and $q_{s_d}$. We first infer the optimal variational posterior $q_{s_d}^* = \arg\max_{q_{s_d}} \mathcal{L}_{\bar{\Theta}}(q_{s_d})$, and then we estimate the parameters by solving the following problem
$$\min_{\Theta}\; \Big( \frac{1}{2}\|\eta\|_2^2 - (\lambda + C)\sum_d \mathcal{L}_{\bar{\Theta}}(q_{s_d}^*) + C \sum_d \max_s \big[-\eta^\top \Delta f_d(s) + \Delta\ell_d(s) + \mathcal{L}_{\bar{\Theta}}(q_s^*)\big] \Big). \qquad (10)$$
Since inferring $q_{s_d}^*$ is included in the step of inferring $q_s^*$ ($\forall s$), the algorithm can be summarized as a two-step EM procedure that iteratively performs posterior inference of $q_s$ and max-margin parameter estimation. Another way to understand this iterative procedure is from the definitions. The first step of inferring $q_s^*$ is to compute the discriminant function $F$ under the current model. Then, we update the model parameters $\Theta$ by solving a large-margin learning problem. For brevity, we present the parameter estimation only. The posterior inference is detailed in Appendix A.1.
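The control flow of the two-step procedure is summarized below. The three inner routines (posterior inference, the closed-form CTM-style updates, and the SVM step of problem (11)) are passed in as callables because their derivations live in the appendix; the skeleton only fixes the order of the steps.

```python
def train_mm_scene(data, params, num_scenes, infer_q,
                   update_topic_params, solve_loss_augmented_svm,
                   epochs=20):
    """Two-step variational EM for max-margin learning (skeleton).

    data: list of (g, r, x, s_d) tuples; params: object holding
    (eta, mu, Sigma, beta, pi); the three callables implement the
    appendix routines and the solver for problem (11).
    """
    for _ in range(epochs):
        # E-step: optimal variational posterior q_s* for every s, image
        Q = [[infer_q(s, r, x, params) for s in range(num_scenes)]
             for (g, r, x, s_d) in data]
        # M-step, part 1: CTM-style closed-form updates of mu, Sigma,
        # beta, pi using the loss-augmented prediction of s
        update_topic_params(params, data, Q)
        # M-step, part 2: online loss-augmented SVM for eta
        params.eta = solve_loss_augmented_svm(params.eta, data, Q)
    return params
```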
Parameter Estimation: This step can be done with an alternating minimization procedure. For the Gaussian parameters $(\mu, \Sigma)$ and multinomial parameters $(\beta, \pi)$, the estimation can be written in closed form as in a standard MLE of CTMs [3] by using a loss-augmented prediction of $s$. For brevity, we defer the details to Appendix A.2. Now, we present the step of estimating $\eta$, which illustrates the essential difference between the large-margin approach and the standard MLE. Specifically, the optimum solution of $\eta$ is obtained by solving the sub-problem³

$$\min_{\eta}\; \frac{1}{2}\|\eta\|_2^2 + C \sum_d \Big( \max_s \big[\eta^\top f(g_d, s) + \Delta\ell_d(s) + \mathcal{L}_{\bar{\Theta}}(q_s^*)\big] - \big[\eta^\top f(g_d, s_d) + \mathcal{L}_{\bar{\Theta}}(q_{s_d}^*)\big] \Big),$$

which is equivalent to a constrained problem by introducing a set of non-negative slack variables $\xi$:

$$\min_{\eta, \xi}\; \frac{1}{2}\|\eta\|_2^2 + C \sum_{d=1}^{D} \xi_d \quad \text{s.t.:}\quad \eta^\top \Delta f_d(s) + \big[\mathcal{L}_{\bar{\Theta}}(q_{s_d}^*) - \mathcal{L}_{\bar{\Theta}}(q_s^*)\big] \ge \Delta\ell_d(s) - \xi_d,\ \forall d, s. \qquad (11)$$
²To retain an accurate large-margin criterion for estimating the model parameters (especially $\eta$), we do not perform the maximization over $s$ at this step.
³The constant (w.r.t. $\eta$) term $-C \sum_d \mathcal{L}_{\bar{\Theta}}(q_{s_d}^*)$ is kept for easy explanation. It won't change the estimation.
The constrained optimization problem is similar to that of a linear SVM [7]. However, the difference is that we have the additional term

$$\Delta\mathcal{L}_d^*(s) \triangleq \mathcal{L}_{\bar{\Theta}}(q_{s_d}^*) - \mathcal{L}_{\bar{\Theta}}(q_s^*).$$

This term indicates that the estimation of the scene classification model is influenced by the topic discovery procedure, which finds an optimum posterior distribution $q^*$. If $\Delta\mathcal{L}_d^*(s) < 0$ for some $s \neq s_d$, which means it is very likely that a wrong scene $s$ explains the image content better than the true scene $s_d$, then the term $\Delta\mathcal{L}_d^*(s)$ acts to augment the linear decision boundary $\eta$ so as to make a correct prediction on this image via the prediction rule (5). If $\Delta\mathcal{L}_d^*(s) > 0$, which means the true scene explains the image content better than $s$, then the linear decision boundary can be slightly relaxed. If we move the additional term to the right-hand side, the problem (11) is to learn a linear SVM, but with an online-updated loss function $\Delta\ell_d(s) - \Delta\mathcal{L}_d^*(s)$. We call this SVM an online loss-augmented SVM. Solving the loss-augmented SVM results in an amplified influence of the scene classification model in the joint predictive rule (5), as shown in Fig. 1 (b).
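Section 5.2 notes that problem (11) is solved with a cutting-plane multiclass SVM solver; the subgradient sketch below is a simpler stand-in that shows how the online-updated loss $\Delta\ell_d(s) - \Delta\mathcal{L}_d^*(s)$ enters. The step size, iteration count, and the 0/1 base loss are arbitrary choices here.

```python
import numpy as np

def solve_loss_augmented_svm(F_feats, elbo, y, C=1.0, steps=200, lr=0.01):
    """Subgradient descent on problem (11).

    F_feats[d, s]: f(g_d, s); elbo[d, s]: L(q_s*); y[d]: true scene.
    """
    D, S, dim = F_feats.shape
    eta = np.zeros(dim)
    for _ in range(steps):
        grad = eta.copy()                   # from (1/2)||eta||^2
        for d in range(D):
            scores = F_feats[d] @ eta
            loss = np.ones(S); loss[y[d]] = 0.0
            # augmented loss: dl_d(s) - (elbo[d, y_d] - elbo[d, s])
            aug = scores + loss - (elbo[d, y[d]] - elbo[d])
            s_star = int(np.argmax(aug))
            if s_star != y[d]:              # most violated constraint
                grad += C * (F_feats[d, s_star] - F_feats[d, y[d]])
        eta -= lr * grad
    return eta
```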
5 Experiments
Now, we present empirical evaluation of our approach on the sports [13] and MIT indoor scene [20]
datasets. Our goal is to demonstrate the advantages of the max-margin method over the MLE for
learning upstream scene models with or without global features. Although the model in Fig. 1 can
also be used for object annotation, we report the performance on scene categorization only, which
is our main focus in this paper. For object annotation, which requires additional human annotated
examples of objects, some preliminary results are reported in the Appendix due to space limitation.
5.1 Datasets and Features
The sports data contain 1574 diverse scene images from 8 categories, as listed in Fig. 2 with example
images. The indoor scene dataset [20] contains 15620 scene images from 67 categories as listed in
Table 2. We use the method [1] to segment these images into small regions based on color, brightness and texture homogeneity. For each region, we extract color, texture and location features, and
quantize them into 30, 50 and 120 codewords, respectively. Similarly, the SIFT features extracted
from the small patches within each region are quantized into 300 SIFT codewords. We use the gist
features [19] as one example of global features. Extension to include other global features, such as
SIFT sparse codes [26], can be directly done without changing the model or the algorithm.
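The quantization step is standard vector quantization; a rough sketch with plain k-means is shown below. The codebook sizes match the ones quoted above, but the initialization and iteration count are our own assumptions.

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, seed=0):
    """K-means codebook over (n, dim) descriptors."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        d2 = ((descriptors[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return centers

def quantize(descriptors, centers):
    """Map each descriptor to the index of its nearest codeword."""
    d2 = ((descriptors[:, None, :] - centers[None]) ** 2).sum(-1)
    return d2.argmin(1)

# e.g., k = 30 / 50 / 120 for color / location / texture, 300 for SIFT
```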
5.2 Models
For the upstream scene model as in Fig. 1, we compare the max-margin learning with the MLE
method, and we denote the scene models trained with max-margin training and MLE by MM-Scene
and MLE-Scene, respectively. For both methods, we evaluate the effectiveness of global features,
and we denote the scene models without global features by MM-Scene-NG and MLE-Scene-NG,
respectively. Since our main goal in this paper is to demonstrate the advantages of max-margin
learning in upstream supervised topic models, rather than dominance of such models over all others,
we just compare with one example of downstream models, the multi-class sLDA (Multi-sLDA) [25]. A systematic comparison with other methods, including DiscLDA [12] and MedLDA [27], is
deferred to a full version. For the downstream Multi-sLDA, the image-wise scene category variable
S is generated from latent object variables O via a softmax function. For this downstream model,
the parameter estimation can be done with MLE as detailed in [25].
Finally, to show the usefulness of the object model in scene categorization, we also compare with the
margin-based multi-class SVM [7] and likelihood-based logistic regression for scene classification
based on the global features. For the SVM, we use the software SVMmulticlass⁴, which implements
a fast cutting-plane algorithm [11] to do parameter learning. We use the same software with slight
changes to learn the loss-augmented SVM in our max-margin method.
5.3 Scene Categorization on the 8-Class Sports Dataset
We partition the dataset equally into training and testing data. For all the models except SVM and
logistic regression, we run 5 times with random initialization of the topic parameters (e.g., $\beta$ and $\pi$).
⁴http://svmlight.joachims.org/svm_multiclass.html
Figure 2: Example images from each category in the sports dataset with predicted scene classes, where the predictions in blue are correct while red ones are wrong predictions.
Figure 3: Classification accuracy of different models (MM-Scene, MM-Scene-NG, MLE-Scene, MLE-Scene-NG, Multi-sLDA, Multi-SVM) with respect to the number of topics.

The average overall accuracy of scene categorization on 8 categories and its standard deviation are shown in Fig. 3. The result of logistic regression is shown in the left green bar in Fig. 1 (c). We also show the confusion matrix of the max-margin scene model with 100 latent topics in Table 1, and example images from each category are shown in Fig. 2 with predicted labels. Overall, the max-margin scene model with global features achieves significant improvements as compared to all other approaches we have tested. Interestingly, although we provide only scene categories as supervised information during training, our best performance with global features is close to that reported in [13], where additional supervision of objects is used. The outstanding performance of the max-margin method
for scene classification can be understood from the following aspects.
Max-margin training: from the comparison of the max-margin approach with the standard MLE
in both cases of using global features and not using global features, we can see that the max-margin
learning can improve the performance dramatically, especially when the scene model uses global
features (about 3 percent). This is due to the well-balanced prediction rule achieved by the max-margin method, as we have explained in Section 3.
Global features: from the comparison between the scene models with and without global features,
we can see that using the gist features can significantly (about 8 percent) improve the scene categorization accuracy in both MLE and max-margin training. We also did some preliminary experiments
on the SIFT sparse codes feature [26], which are a bit more expensive to extract. By using both gist
and sparse codes features, we can achieve dramatic improvements in both max-margin and MLE
methods. Specifically, the max-margin scene model achieves an accuracy of about 0.83 in scene
classification, and the likelihood-based model obtains an accuracy of about 0.80.
Object modeling: the superior performance of the max-margin learned MM-Scene model compared to the SVM and logistic regression (see the left green bar of Fig. 1 (c)), which use global features
only, indicates that modeling objects can facilitate scene categorization. This is because the scene
classification model is influenced by the latent object modeling through the term $\Delta\mathcal{L}_d^*(s)$, which can improve the decision boundary of a standard linear SVM for those images that have negative scores of $\Delta\mathcal{L}_d^*(s)$, as we have discussed for the online loss-augmented SVM. However, object modeling
does not improve the classification accuracy and sometimes it can even be harmful when the scene
model is learned with the standard MLE. This is because the object model (using the state-of-the-art
representation) (e.g., MM-MLE-NG) alone performs much worse than global feature models (e.g.,
logistic regression), as shown in Fig. 1 and Fig. 3, and the standard MLE learns an imbalanced
prediction rule, as we have analyzed in Section 3. Given that the state-of-the-art object model is not
good, it is very encouraging to see that we can still obtain positive improvements by using the closely
coupled and well-balanced max-margin learning. These results indicate that further improvements
can be expected by improving the local object model, e.g., by incorporating rich features.
We also compare with the theme model [9], which is for scene categorization only. The theme model
uses a different image representation, where each image is a vector of image patch codewords. The
theme model achieves about 0.65 in classification accuracy, lower than that of MM-Scene.
Table 1: Confusion matrix for 100-topic MM-Scene on the sports dataset (rows: true class; columns: predicted class; overall accuracy 0.717).

              badminton  bocce  croquet  polo  rockclimbing  rowing  sailing  snowboarding
badminton       0.768    0.051   0.051  0.081     0.020      0.020    0.000      0.010
bocce           0.043    0.333   0.275  0.145     0.087      0.058    0.014      0.043
croquet         0.025    0.144   0.669  0.093     0.025      0.025    0.008      0.008
polo            0.220    0.055   0.099  0.516     0.022      0.022    0.011      0.055
rockclimbing    0.000    0.010   0.021  0.000     0.845      0.031    0.010      0.082
rowing          0.008    0.008   0.008  0.008     0.024      0.912    0.016      0.016
sailing         0.011    0.021   0.000  0.021     0.011      0.053    0.884      0.000
snowboarding    0.011    0.021   0.032  0.095     0.084      0.053    0.063      0.642
Table 2: The 67 indoor categories sorted by classification accuracy of 70-topic MM-Scene.

buffet 0.85            green house 0.84          cloister 0.71
inside bus 0.61        movie theater 0.60        poolinside 0.59
church inside 0.56     classroom 0.55            concert hall 0.55
corridor 0.55          florist 0.55              trainstation 0.54
closet 0.51            elevator 0.49             nursery 0.44
bowling 0.41           gameroom 0.40             lobby 0.40
prison cell 0.39       casino 0.36               dining room 0.35
kitchen 0.35           winecellar 0.34           library 0.31
tv studio 0.30         warehouse 0.29            bathroom 0.26
bookstore 0.25         computerroom 0.25         dentaloffice 0.25
grocerystore 0.25      inside subway 0.25        mall 0.25
meeting room 0.25      stairscase 0.25           studiomusic 0.24
children room 0.21     garage 0.20               gym 0.20
hairsalon 0.20         livingroom 0.20           operating room 0.20
pantry 0.20            subway 0.20               toystore 0.19
artstudio 0.14         fastfood restaurant 0.13  auditorium 0.12
bakery 0.11            bedroom 0.11              clothingstore 0.10
hospitalroom 0.10      kindergarden 0.10         laundromat 0.10
office 0.10            restaurant kitchen 0.09   shoeshop 0.09
videostore 0.08        airport inside 0.07       bar 0.06
deli 0.06              jewelleryshop 0.06        laboratorywet 0.05
locker room 0.05       museum 0.05               restaurant 0.05
waitingroom 0.04
Figure 4: Classification accuracy of MM-Scene with different loss functions $\Delta\ell_d(s)$ (0/1, 0/5, 0/10, 0/20, 0/30, 0/40, 0/50).

Finally, we examine the influence of the loss function $\Delta\ell_d(s)$ on the performance of the max-margin scene model. As we can see in problem (11), the loss function $\Delta\ell_d(s)$ is another important factor that influences the estimation of $\eta$ and its relative importance in the prediction rule (5). Here, we use the $0/\ell$-loss function, that is, $\Delta\ell_d(s) = \ell$ if $s \neq s_d$, and 0 otherwise. Fig. 4 shows the performance of the 100-topic MM-Scene model when using different loss functions. When $\ell$ is set between 10 and 20, the MM-Scene method stably achieves the best performance. The above results in Fig. 3 and Table 1 are achieved with $\ell$ selected from 5 to 40 with cross-validation during training.
5.4 Scene Categorization on the 67-Class MIT Indoor Scene Dataset
Figure 5: Classification accuracy on the 67-class MIT indoor dataset (MLE-Scene-NG, MM-Scene-NG, SVM, LR, ROI+Gist(segmentation), ROI+Gist(annotation), MLE-Scene, MM-Scene).

The MIT indoor dataset [20] contains complex scene images from 67 categories. We use the same training and testing dataset as in [20], in which each category has about 80 images for training and about 20 images for testing. We compare the joint scene model with SVM, logistic regression (LR), and the prototype-based methods [20]. Both the SVM and LR are based on the global gist features only. For the joint scene model, we set the number of latent topics at 70. The overall performance of the different methods is shown in Fig. 5, and the classification accuracy of each class is shown in Table 2. For the prototype-based methods, we cite
the results from [20]. We can see that the joint scene
model (both MLE-Scene and MM-Scene) significantly outperforms SVM and LR that use global
features only. The likelihood-based MLE-Scene slightly outperforms the ROI-Gist(segmentation),
which uses both the global gist features and local region-of-interest (ROI) features extracted from
automatically segmented regions [20]. By using max-margin training, the joint scene model (i.e.,
MM-Scene) achieves significant improvements compared to MLE-Scene. Moreover, the marginbased MM-Scene, which uses automatically segmented regions to extract features, outperforms the
ROI-Gist(annotation) method that uses human-annotated regions of interest.
6 Conclusions
In this paper, we address the weak coupling problem of the commonly used maximum likelihood
estimation in learning upstream scene understanding models by presenting a joint maximum margin and maximum likelihood learning method. The proposed approach achieves a close interplay
between the prediction model estimation and latent topic discovery, and thereby a well-balanced
prediction rule. The optimization problem is efficiently solved with a variational EM procedure,
which iteratively learns an online loss-augmented SVM. Finally, we demonstrate the advantages of
max-margin training and the effectiveness of using global features in scene understanding on both
an 8-category sports dataset and the 67-class MIT indoor scene data.
Acknowledgements
J.Z. and E.P.X. are supported by ONR N000140910758, NSF IIS-0713379, NSF CAREER DBI-0546594, and an Alfred P. Sloan Research Fellowship to E.P.X. L.F-F. is partially supported by
an NSF CAREER grant (IIS-0845230), a Google research award, and a Microsoft Research Fellowship. We also would like to thank Olga Russakovsky for helpful comments.
References
[1] P. Arbeláez and L. Cohen. Constrained image segmentation from hierarchical boundaries. In CVPR, 2008.
[2] I. Biederman. On the semantics of a glance at a scene. Perceptual Organization, 213-253, 1981.
[3] D. Blei and J. Lafferty. Correlated topic models. In NIPS, 2006.
[4] D. Blei and J.D. McAuliffe. Supervised topic models. In NIPS, 2007.
[5] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, (3):993-1022, 2003.
[6] L.-L. Cao and L. Fei-Fei. Spatially coherent latent topic model for concurrent segmentation and classification of objects and scenes. In ICCV, 2007.
[7] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, (2):265-292, 2001.
[8] L. Du, L. Ren, D. Dunson, and L. Carin. A Bayesian model for simultaneous image clustering, annotation and object segmentation. In NIPS, 2009.
[9] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, 2005.
[10] A. Friedman. Framing pictures: The role of knowledge in automatized encoding and memory for gist. Journal of Experimental Psychology: General, 108(3):316-355, 1979.
[11] T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27-59, 2009.
[12] S. Lacoste-Julien, F. Sha, and M. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In NIPS, 2008.
[13] L.-J. Li and L. Fei-Fei. What, where and who? Classifying events by scene and object recognition. In CVPR, 2007.
[14] L.-J. Li, R. Socher, and L. Fei-Fei. Towards total scene understanding: Classification, annotation and segmentation in an automatic framework. In CVPR, 2009.
[15] D.C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, (45):503-528, 1989.
[16] D.G. Lowe. Object recognition from local scale-invariant features. In ICCV, 1999.
[17] K. Murphy, A. Torralba, and W. Freeman. Using the forest to see the trees: A graphical model relating features, objects, and scenes. In NIPS, 2003.
[18] D. Navon. Forest before trees: The precedence of global features in visual perception. Perception and Psychophysics, 5:197-200, 1969.
[19] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42(3):145-175, 2001.
[20] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, 2009.
[21] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[22] J. Sivic, B.C. Russell, A. Efros, A. Zisserman, and W.T. Freeman. Discovering objects and their locations in images. In ICCV, 2005.
[23] E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. Learning hierarchical models of scenes, objects, and parts. In CVPR, 2005.
[24] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[25] C. Wang, D. Blei, and L. Fei-Fei. Simultaneous image classification and annotation. In CVPR, 2009.
[26] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image classification. In CVPR, 2009.
[27] J. Zhu, A. Ahmed, and E.P. Xing. MedLDA: Maximum margin supervised topic models for regression and classification. In ICML, 2009.
3,478 | 415 | Oriented Non-Radial Basis Functions for Image Coding and Analysis

Avijit Saha¹   Jim Christian   D. S. Tang
Microelectronics and Computer Technology Corporation
3500 West Balcones Center Drive
Austin, TX 78759

Chuan-Lin Wu
Department of Electrical and Computer Engineering
University of Texas at Austin, Austin, TX 78712
ABSTRACT
We introduce oriented non-radial basis function networks (ONRBF)
as a generalization of Radial Basis Function networks (RBF), wherein
the Euclidean distance metric in the exponent of the Gaussian is replaced by a more general polynomial. This permits the definition of
more general regions, in particular hyper-ellipses with orientations. In the case of hyper-surface estimation this scheme requires a
smaller number of hidden units and alleviates the "curse of dimensionality" associated with kernel-type approximators. In the case of an image, the hidden units correspond to features in the image and the
parameters associated with each unit correspond to the rotation, scaling and translation properties of that particular "feature". In the context of the ONBF scheme, this means that an image can be
represented by a small number of features. Since transformations of an
image by rotation, scaling and translation correspond to identical
transformations of the individual features, the ONBF scheme can be
used to considerable advantage for the purposes of image recognition
and analysis.
1 INTRODUCTION
Most "neural network" or "connectionist" models have evolved primarily as adaptive
function approximators. Given a set of input-output pairs <x,y> (x from an underlying
function f, i.e. y = f(x)), a feed-forward, time-independent neural network estimates a
1. Alternate address: Dept. of ECE, Univ. of Texas at Austin, Austin, TX 78712
function y' = g(p, x) such that $E = \rho(y - y')$ is arbitrarily small over all <x,y> pairs. Here, p is the set of parameters associated with the network model and $\rho$ is a metric that measures the quality of approximation, usually the Euclidean norm. In this paper, we shall restrict our discussion to approximation of real-valued functions of the form $f: \mathbb{R}^n \to \mathbb{R}$. For a network of fixed structure (determined by g), all or part of the constituent parameter set p that minimizes E is determined adaptively by modifying the set of parameters. The problem of approximation or hypersurface reconstruction is then one of determining what class
of g to use, and then the choice of a suitable algorithm for determining the parameters p, given a set of samples {<x,y>}. By far the most popular method for determining network parameters has been the gradient descent method. If the error surface is quadratic or convex, gradient descent methods will yield an optimal value for the network parameters. However, the burning problem still remains the determination of network parameters when the error function is infested with local minima. One way of obviating the problem of local minima is to match a network architecture with an objective function such that the error surface is free of local minima. However, this might limit the power of the network architecture, such as in the case of linear perceptrons [1]. Another approach is to obtain algebraic transformations of the objective functions such that algorithms can be readily designed around the transformed functions to avoid local minima. The random optimization method of Matyas and its variations have been studied recently [2] as alternate avenues for determining the parameter set p. Perhaps the most probable reason for the BP algorithm's popularity is that the error surface is relatively smooth [1],[3].
The problem of local minima is circumvented somewhat differently in local or kernel-type estimators. The input space in such a method is partitioned into a number of local regions, and if the number of regions defined is sufficiently large, then the output response in each local region is sufficiently uniform or smooth and the error will remain bounded, i.e., a local minimum will be close to the global minimum. The problem with kernel-type estimators is that the number of "bins", "kernels" or "regions" that need to be defined increases
exponentially with the dimension of the input space. An improvement such as the one considered by [4] is to define the kernels only in regions of the input space where there is data.
However, our experiments indicate that even this may not be sufficient to lift the curse of
dimensionality. If instead of limiting the shape of the kernels to be boxes or even hyperspheres we select the kernels to be shapes defined by second-order polynomials, then a
larger class of shapes or regions can be defined resulting in significant reductions in the
number of kernels required. This was the principal motivation behind our generalization
of ordinary RBF networks. Also, we have determined that radial basis function networks
will, given sufficiently large widths, linearize the output response between two hidden
units. This gives rise to hyperacuity or coarse coding, whereby a high resolution of stimuli
can be observed at the signal level despite poor resolution in the sensor array. In the context of function approximation this means that if the hyper-surface being approximated
varies linearly in a certain region, the output behavior can be captured by suitably placing
a single widely tuned receptive field in that region. Therefore, it is advantageous to choose
the regions with proper knowledge of the output response in that region as opposed to
choosing the bins based on the inputs alone. These were some of the principal motivations
for our generalization.
In addition to the architectural and learning issues, we have been concerned with approximation schemes in which the optimal parameter values have readily interpretable forms
that may allow other useful processing elsewhere. In the following section we present
ONBF as a generalization of RBF [4] and GRBF [5]. We show how rotation, scaling and
translation (center) information of these regions can be readily extracted from the parameter values associated with each hidden unit. In subsequent sections we present experimental results illustrating the performance of ONRBF as a function approximator and
feasibility of ONRBF for the purposes of image coding and analysis.
2 ORIENTED NON-RADIAL BASIS FUNCTION NETWORKS
Radial Basis Function networks can be described by the formula:
$$f(x) = \sum_{a=0}^{k} w_a R_a(x),$$
where f(x) is the output of the network, k is the number of hidden units, $w_a$ is the weight associated with hidden unit a, and $R_a(x)$ is the response of unit a. The response $R_a(x)$ of
unit a is given by
$$R_a = e^{-\left(\frac{\|c_a - x\|}{\sigma_a}\right)^2}.$$
Poggio and Girosi [5] have considered the generalization where a different width parameter $\sigma_{ai}$ is associated with each input dimension i. The response function $R_a$ is then defined as

$$R_a(x) = e^{-\sum_{i=1}^{d}\left(\frac{c_{ai} - x_i}{\sigma_{ai}}\right)^2}.$$
Now each $\sigma_{ai}$ can influence the response of the a-th unit, and the effect is that widths associated with irrelevant or correlated inputs will tend to be increased. It has been shown that if one of the input components has a random input and a constant width (constant for that particular dimension) is used for each receptive field, then the width for that particular receptive field is maximum [6].
The generalization we consider in this paper is a further shaping of the response $R_a$ by composing it with a rotation function $S_a$ designed to rotate the unit about its center in d-space, where d is the input dimension. This composition can be represented compactly by a response function of the form:

$$R_a = e^{-\| M_a [x_1, \ldots, x_d, 1]^\top \|^2},$$

where $M_a$ is a d by d+1 matrix. The matrix transforms the input vectors, and these transformations correspond to translation (center information), scaling and rotation of the input vectors. The response function presented above is the restricted form of a more general response function of the form:
$$R_a = e^{-P(x)},$$
where the exponent is a general polynomial in the input variables. In the following sections we present the learning rules and we show how center, rotation and scaling information can be extracted from the matrix elements. We do this for the case when the input
dimension is 2 (as is the case for 2-dimensional images) but the results are generalized
easily.
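Numerically, the response of an oriented unit is a matrix-vector product in homogeneous coordinates followed by a squared norm. A small sketch follows; the 2-by-3 matrix and weights are arbitrary illustrative values.

```python
import numpy as np

def onrbf_response(M, x):
    """Response of one oriented unit: exp(-||M [x, 1]||^2)."""
    x1 = np.append(x, 1.0)            # homogeneous coordinates
    return np.exp(-np.sum((M @ x1) ** 2))

def onrbf_network(Ms, w, x):
    """f(x) = sum_a w_a R_a(x) over a list of unit matrices."""
    return sum(w_a * onrbf_response(M_a, x) for M_a, w_a in zip(Ms, w))

# a rotated, scaled, translated 2-D receptive field
M0 = np.array([[0.8, 0.3, -1.0],
               [-0.3, 0.8, 0.5]])
print(onrbf_network([M0], [1.0], np.array([0.7, 0.1])))
```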
2.1 LEARNING RULES
Consider the n-dimensional case where $\langle x_1, \ldots, x_n \rangle$ represents the input vector and $m_{ajk}$ represents the matrix element of the j-th row and k-th column of the matrix $M_a$ associated with the a-th unit. Then the response of the a-th unit is given by:

$$R_a(x) = e^{-\sum_{i=1}^{n}\left(\sum_{j=1}^{n+1} m_{aij}\, x_j\right)^2}, \qquad x_{n+1} \equiv 1.$$
The total sum-squared error over all patterns is given by:

$$TE = \sum_p \big[f(x_p) - F(x_p)\big]^2 = \sum_p E_p.$$
Then the derivative of the error due to the p-th pattern with respect to the matrix element $m_{aij}$ of the a-th unit is given by:

$$\frac{\partial E_p}{\partial m_{aij}} = 2\big[f(x_p) - F(x_p)\big]\frac{\partial F}{\partial m_{aij}} = 2\, L_p\, \frac{\partial F}{\partial m_{aij}},$$

and:

$$\frac{\partial F}{\partial m_{aij}} = -2\, w_a\, R_a(x_p)\, (m_{ai} \cdot x_p)\, x_{pj},$$

where:
where.
mw : is the ith row of the matrix corresponding to the a th unit
xp : is the input vector
is the jth variable in the input space.
Xj :
Then the update rule for the matrix elements with learning rate T) is given by:
t+I
m 11..
ij
a
= m t11 ?? -n--(En)
?Iam
...
u
~.
IJ
and the learning rule for the weights wa. is given by:
2.2 EXTRACTING ROTATION, SCALE AND CENTER VALUES
In this section we present the equations for extracting the rotation. translation and scaling values (widths) of the a th receptive field from its associated matrix elements. We
present these for the special case when n the input dimension is equal to 2. since that is
the case for images. The input vector x is represented by <x.y> and the rules for converting the matrix elements into center. scaling and rotation infonnation is as follows:
?
center (Xo,Yo)
731
732
Saha, Christian, Tcmg, and Wu
where,
?
rotation (8)
?
scaling or receptive field widths or sigmas
1
2
2
2
2
m 12 m ll +m 22 m 21
1
d 1 = -2 (m ll +m 21 +m I2 +m 22 )+
.
== r::
sm2
..;2 a
e
I
2.3 HIERARCHICAL CLUSTERING
We use a multi-resolution, hierarchical approach to detennine where to place hidden units
to maximize the accuracy of approximation and to locate image features. For illustration,
we consider our method in the context of image processing, though the idea will work for
any type of function approximation problem. The process begins with a small number of
widely tuned receptive field units. The widths are made high my multiplying the value obtained from the nearest neighbor-heuristic by a large overlap parameter. The large widths
force the units to excessively smooth the image being approximated. Then, errors will be
Observed in regions where detailed features occur. Those pixels for which high error (say,
greater than one standard deviation from the mean) occurred are collected and new units
are added in locations chosen randomly from this set. The entire process can be repeated
until a desired level of accuracy is reached. Notice that, when the network is finally
trained, the top levels in the hierarchy provide global information about the image under
consideration. This scheme is slightly different from the one presented in [7], where units in each resolution learn the error observed in the previous resolution; in our method, after the addition of the new units all the units learn the original function as opposed to some error function.
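In outline, the procedure is the loop below (our sketch; init_units and train are hypothetical helpers wrapping the update rules of Section 2.1, and the one-standard-deviation threshold follows the text):

    import numpy as np

    def grow_network(X, f_target, n_init=8, n_add=4, max_rounds=10, tol=1e-3):
        """Coarse-to-fine placement of units driven by the residual error."""
        rng = np.random.default_rng(0)
        centers = X[rng.choice(len(X), n_init, replace=False)]
        Ms = init_units(centers, overlap=4.0)          # wide, smoothing units
        ws = [0.0] * n_init
        for _ in range(max_rounds):
            Ms, ws = train(Ms, ws, X, f_target)        # all units relearn f itself
            err = np.abs(f_target - network_output(Ms, ws, X))
            if err.mean() < tol:
                break
            bad = X[err > err.mean() + err.std()]      # high-error locations
            new = bad[rng.choice(len(bad), n_add)]
            Ms += init_units(new, overlap=1.0)         # narrower units for detail
            ws += [0.0] * n_add
        return Ms, ws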
3 RESULTS
3.1 ONRBF AS AN APPROXIMATOR
Oriented non-radial basis function networks allow the definition of larger regions or receptive fields. This is due to the fact that rotation, along with elliptical hyper-surfaces as opposed to mere spheres, permits the grouping of more nearby points into a single region.
Therefore, the approximation accuracy of such a network can be quite good with even a
small number of units. For instance, Table 1 compares ordinary radial basis function networks with oriented non-radial basis function networks in terms of the number of units required to achieve various levels of accuracy. The function approximated is the Mackey-Glass differential delay equation:
    \frac{dx_t}{dt} = -b\, x_t + a\, \frac{x_{t-\tau}}{1 + x_{t-\tau}^{10}}
TABLE 1. Normalized approximation error for radial and non-radial basis functions. Rows: RBF train, ONRBF train, RBF test 1, ONRBF test 1, RBF test 2, ONRBF test 2; columns: networks ranging from 10 to 320 hidden units.
The series used was generated with \tau = 17, a = 0.1 and b = 0.2. A series of 500 consecutive points was used for training, and the next two sets of 500 points were used for cross-validation. The training vector at time t is the tuple (x_t, x_{t-6}, x_{t-12}, x_{t-18}, x_{t+85}), where the first four components form the input vector and the last forms the target, and x_t is the value of the series at time t. Table 1 lists the normalized error for each experiment, that is, the root mean square prediction error divided by the standard deviation of the data series. Oriented non-radial basis function networks yield higher accuracy than do radial basis function networks with the same number of units. In addition, ONRBF nets were found to generalize better.
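A minimal generator for this benchmark, under the assumption of a simple Euler discretization with unit step (the text does not state the integration scheme), is:

    import numpy as np

    def mackey_glass(n, tau=17, a=0.1, b=0.2, x0=1.2):
        """Euler steps of dx/dt = -b x(t) + a x(t - tau) / (1 + x(t - tau)^10).

        Note: the classic chaotic setting in the RBF literature is a=0.2, b=0.1;
        the defaults here follow the values quoted in the text.
        """
        x = np.full(n + tau, x0)
        for t in range(tau, n + tau - 1):
            x[t + 1] = x[t] - b * x[t] + a * x[t - tau] / (1.0 + x[t - tau] ** 10)
        return x[tau:]

    def make_dataset(x):
        """Inputs (x_t, x_{t-6}, x_{t-12}, x_{t-18}); target x_{t+85}."""
        ts = np.arange(18, len(x) - 85)
        X = np.stack([x[ts], x[ts - 6], x[ts - 12], x[ts - 18]], axis=1)
        return X, x[ts + 85]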
3.2 IMAGE CODING AND ANALYSIS
For images each hidden unit corresponds to some feature in the input space. This implies that there is some invariant property associated with the region spanned by the receptive field. For bitmaps this property could be the probability density function (ignoring higher order statistics) and a feature is a region over which the probability density function remains the same. For grey level images, instead of the linear weight this property could be described by a low order polynomial. We have found that when the parameters of an image function are determined adaptively using the learning rules in Section 2.1, the receptive fields organize themselves so as to capture features in the input space. This is illustrated in Figure 1, where the input image is a bitmap for a set of Chinese characters. The property of a feature in this case is the value of the pixel (0 or 1) in the coordinate location specified by the input, and therefore a linear term (for the weight) as used in Section 2.1 is sufficient. Figure 1.a is the input bitmap image and Figure 1.b shows the plot of the regions of influence
of the individual receptive fields. Notice that the individual receptive fields tend to become
"responsible" for entire strokes of the character.
We would like to point out that if the initial positions of the hidden units are chosen randomly, then with each new start of the approximation process a single feature may be represented by a collection of hidden units in many different manners, and the task of
Figure 1.a: Bitmap of Chinese characters which is the input image.
Figure 1.b: Plot of regions of influence of receptive fields after training.
recognition becomes difficult. Therefore, for consistent approximation, a node deletion or
region growing algorithm is needed. Such an algorithm has been developed and will be
presented elsewhere. If with every approximation of the same image, we get the same features (parameters for the hidden units), then images under rotation and scaling can also be
recognized easily, since there will be a constant scaling and rotational change in all the
hidden units.
4 CONCLUSIONS
We have presented a generalization of RBF networks that allows interpretation of the parameter values associated with the hidden units and performs better as a function approximator. The number of parameters associated with each hidden unit grows quickly with the input dimension (O(d^2)). However, the number of hidden units required is significantly lower if the function is relatively smooth. Alternatively, one can compose the Gaussian response of the original RBF by using a suitable clipping function in which the number of associated parameters grows linearly with the input dimension d. For images, the input dimension is 2 and the number of parameters associated with each hidden unit is 6, as opposed to 5 when the multidimensional Gaussian is represented by the superposition of 1-dimensional Gaussians, and 4 with RBF networks.
References
[1] Widrow, Bernard and Michael A. Lehr, "30 Years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation", Proc. of the IEEE, Vol. 78, No. 9, Sept 1990, pp 1415-1442.
[2] Baba, Norio, "A New Approach for Finding the Global Minimum of Error Function of Neural Networks", Neural Networks, Vol. 2, pp 367-373, 1989.
[3] Baldi, Pierre and Kurt Hornik, "Neural Networks and Principal Component Analysis: Learning from Examples Without Local Minima", Neural Networks, Vol. 2, pp 53-58, 1989.
[4] Moody, John and Darken, Christian, "Learning with Localized Receptive Fields", Proc. of the 1988 Connectionist Models Summer School, CMU.
[5] Poggio, Tomaso and Federico Girosi, "Networks for Approximation and Learning", Proc. of the IEEE, Vol. 78, No. 9, September 1990, pp 1481-1496.
[6] Saha, Avijit, D. S. Tang and Chuan-Lin Wu, "Dimension Reduction Using Networks of Linear Superposition of Gaussian Units", MCC Technical Report, Sept. 1990.
[7] Moody, John and Darken, Christian, "Learning with Localized Receptive Fields", Proc. of the 1988 Connectionist Models Summer School, CMU.
Probabilistic Multi-Task Feature Selection

Yu Zhang^1, Dit-Yan Yeung^1, Qian Xu^2
^1 Department of Computer Science and Engineering, ^2 Bioengineering Program
Hong Kong University of Science and Technology
{zhangyu,dyyeung}@cse.ust.hk, [email protected]
Abstract
Recently, some variants of the l_1 norm, particularly matrix norms such as the l_{1,2} and l_{1,inf} norms, have been widely used in multi-task learning, compressed sensing and other related areas to enforce sparsity via joint regularization. In this paper, we unify the l_{1,2} and l_{1,inf} norms by considering a family of l_{1,q} norms for 1 < q <= inf and study the problem of determining the most appropriate sparsity enforcing norm to use in the context of multi-task feature selection. Using the generalized normal distribution, we provide a probabilistic interpretation of the general multi-task feature selection problem using the l_{1,q} norm. Based on this probabilistic interpretation, we develop a probabilistic model using the noninformative Jeffreys prior. We also extend the model to learn and exploit more general types of pairwise relationships between tasks. For both versions of the model, we devise expectation-maximization (EM) algorithms to learn all model parameters, including q, automatically. Experiments have been conducted on two cancer classification applications using microarray gene expression data.
1 Introduction
Learning algorithms based on l_1 regularization have a long history in machine learning and statistics. A well-known property of l_1 regularization is its ability to enforce sparsity in the solutions. Recently, some variants of the l_1 norm, particularly matrix norms such as the l_{1,2} and l_{1,inf} norms, were proposed to enforce sparsity via joint regularization [24, 17, 28, 1, 2, 15, 20, 16, 18]. The l_{1,2} norm is the sum of the l_2 norms of the rows and the l_{1,inf} norm is the sum of the l_inf norms of the rows. Regularizers based on these two matrix norms encourage row sparsity, i.e., they encourage entire rows of the matrix to have zero elements. Moreover, these norms have also been used for enforcing group sparsity among features in conventional classification and regression problems, e.g., group LASSO [29]. Recently, they have been widely used in multi-task learning, compressed sensing and other related areas. However, when given a specific application, we often have no idea which norm is the most appropriate choice to use.

In this paper, we study the problem of determining the most appropriate sparsity enforcing norm to use in the context of multi-task feature selection [17, 15]. Instead of choosing between specific choices such as the l_{1,2} and l_{1,inf} norms, we consider a family of l_{1,q} norms. We restrict q to the range 1 < q <= inf to ensure that all norms in this family are convex, making it easier to solve the optimization problem formulated based on it. Within this family, the l_{1,2} and l_{1,inf} norms are just two special cases. Using the l_{1,q} norm, we formulate the general multi-task feature selection problem and give it a probabilistic interpretation. It is noted that the automatic relevance determination (ARD) prior [9, 3, 26] comes as a special case under this interpretation. Based on this probabilistic interpretation, we develop a probabilistic formulation using a noninformative prior called the Jeffreys prior [10]. We devise an expectation-maximization (EM) algorithm [8] to learn all model parameters, including q, automatically. Moreover, an underlying assumption of existing multi-task feature selection methods is that all tasks are similar to each other and they share the same features. This assumption may not be correct in practice because there may exist outlier tasks or tasks with negative correlation. As another contribution of this paper, we propose to use a matrix variate generalized normal prior [13] for the model parameters to learn the relationships between tasks. The task relationships learned here can be seen as an extension of the task covariance used in [4, 32, 31]. Experiments will be reported on two cancer classification applications using microarray gene expression data.
2 Multi-Task Feature Selection
Suppose we are given m learning tasks {T_i}_{i=1}^{m}. For the ith task T_i, the training set D_i consists of n_i labeled data points in the form of ordered pairs (x_j^i, y_j^i), j = 1, ..., n_i, with x_j^i in R^d and its corresponding output y_j^i in R if it is a regression problem and y_j^i in {-1, 1} if it is a binary classification problem. The linear function for T_i is defined as f_i(x) = w_i^T x + b_i. For applications that need feature selection, e.g., document classification, the feature dimensionality is usually very high and it has been found that linear methods usually perform better.
The objective functions of most existing multi-task feature selection methods [24, 17, 28, 1, 2, 15, 20, 16, 18] can be expressed in the following form:

    \sum_{i=1}^{m} \sum_{j=1}^{n_i} L(y_j^i, \mathbf{w}_i^T \mathbf{x}_j^i + b_i) + \lambda\, \Omega(\mathbf{W}),    (1)
where W = (w_1, ..., w_m), L(., .) denotes the loss function (e.g., squared loss for regression and hinge loss for classification), \Omega(.) is the regularization function that enforces feature sparsity under the multi-task setting, and \lambda is the regularization parameter controlling the relative contribution of the empirical loss and the regularizer. Multi-task feature selection seeks to minimize the objective function above to obtain the optimal parameters {w_i, b_i}. Two regularization functions are widely used in existing multi-task feature selection methods. One of them is based on the l_{1,2} norm of W [17, 28, 1, 2, 16, 18]: \Omega(\mathbf{W}) = \sum_{j=1}^{d} \|\mathbf{w}^j\|_2, where \|.\|_p denotes the p-norm (or l_p norm) of a vector and w^j denotes the jth row of W. Another one is based on the l_{1,inf} norm of W [24, 15, 20]: \Omega(\mathbf{W}) = \sum_{j=1}^{d} \|\mathbf{w}^j\|_\infty.
In this paper, we unify these two cases by using the l_{1,q} norm of W to define a more general regularization function:

    \Omega(\mathbf{W}) = \sum_{j=1}^{d} \|\mathbf{w}^j\|_q, \qquad 1 < q \le \infty.
Note that when q < 1, \Omega(W) is non-convex with respect to W. Although \Omega(W) is convex when q = 1, each element of W is independent of each other and so the regularization function cannot enforce feature sparsity. Thus we restrict the range to 1 < q <= inf.

Even though restricting the range to 1 < q <= inf can enforce feature sparsity between different tasks, different values of q imply different "group discounts" for sharing the same feature. Specifically, when q approaches 1, the cost grows almost linearly with the number of tasks that use a feature, and when q = inf, only the most demanding task matters. So selecting a proper q can potentially have a significant effect on the performance of the learning algorithms.

In the following, we first give a probabilistic interpretation for multi-task feature selection methods. Based on this probabilistic interpretation, we then develop a probabilistic model which, among other things, can solve the model selection problem automatically by estimating q from data.
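For reference, the regularizer and its two familiar endpoints can be computed as follows (a small sketch of ours):

    import numpy as np

    def l1q_norm(W, q):
        """Omega(W) = sum_j ||w^j||_q over the d rows of W (shape d x m)."""
        return np.linalg.norm(W, ord=q, axis=1).sum()

    # Endpoints: l1q_norm(W, 2) sums row l2 norms; l1q_norm(W, np.inf) sums row maxima.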
3 Probabilistic Interpretation
In this section, we will show that existing multi-task feature selection methods are related to the
maximum a posteriori (MAP) solution of a probabilistic model. This probabilistic interpretation
sets the stage for introducing our probabilistic model in the next section.
We first introduce the generalized normal distribution [11] which is useful for the model to be introduced.
Definition 1. z is a univariate generalized normal random variable iff its probability density function (p.d.f.) is given as follows:

    p(z) = \frac{1}{2\rho\, \Gamma(1 + \frac{1}{q})} \exp\left( -\frac{|z - \mu|^q}{\rho^q} \right),

where \Gamma(.) denotes the Gamma function and |.| denotes the absolute value of a scalar.
For simplicity, if z is a univariate generalized normal random variable, we write z ~ GN(\mu, \rho, q). The (ordinary) normal distribution can be viewed as a special case of the generalized normal distribution when q = 2 and the Laplace distribution is a special case when q = 1. When q approaches +inf, the generalized normal distribution approaches the uniform distribution in the range [\mu - \rho, \mu + \rho]. The generalized normal distribution has proven useful in Bayesian analysis and robustness studies.
Definition 2. A standardized n x 1 multivariate generalized normal random variable z = (z_1, ..., z_n)^T consists of n independent and identically distributed (i.i.d.) univariate generalized normal random variables.
If z is a standardized n x 1 multivariate generalized normal random variable, we write z ~ MGN(\mu, \rho, q) with the following p.d.f.:

    p(\mathbf{z}) = \left[ \frac{1}{2\rho\, \Gamma(1 + \frac{1}{q})} \right]^n \exp\left( -\sum_{i=1}^{n} \frac{|z_i - \mu|^q}{\rho^q} \right).
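This density is available in SciPy as gennorm, whose shape parameter plays the role of q; the correspondence below is our reading of the parameterization and recovers the two special cases:

    import numpy as np
    from scipy.stats import gennorm, norm, laplace

    # GN(mu, rho, q) in this paper's notation matches gennorm(beta=q, loc=mu,
    # scale=rho): both have density exp(-|z - mu|^q / rho^q) / (2 rho Gamma(1 + 1/q)).
    z = np.linspace(-3.0, 3.0, 7)
    assert np.allclose(gennorm.pdf(z, 2, scale=np.sqrt(2.0)), norm.pdf(z))  # q = 2
    assert np.allclose(gennorm.pdf(z, 1), laplace.pdf(z))                   # q = 1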
With these definitions, we now begin to present our probabilistic interpretation for multi-task feature
selection by proposing a probabilistic model. For notational simplicity, we assume that all tasks
perform regression. Extension to include classification tasks will go through similar derivation.
For a regression problem, we use the normal distribution to define the likelihood for x_j^i:

    y_j^i \sim N(\mathbf{w}_i^T \mathbf{x}_j^i + b_i,\, \sigma^2),    (2)

where N(\mu, \sigma^2) denotes the (univariate) normal distribution with mean \mu and variance \sigma^2.
We impose the generalized normal prior on each element of W:

    W_{ji} \sim GN(0, \rho_j, q),    (3)

where W_{ji} is the (j, i)th element of W (or, equivalently, the jth element of w_i or the ith element of w^j). Then we can express the prior on w^j as

    (\mathbf{w}^j)^T \sim MGN(0, \rho_j, q).

When q = 2, this becomes the ARD prior [9, 3, 26] commonly used in Bayesian methods for enforcing sparsity. From this view, the generalized normal prior can be viewed as a generalization of the ARD prior.
With the above likelihood and prior, we can obtain the MAP solution of W by solving the following problem:

    \min_{\mathbf{W}, \mathbf{b}, \boldsymbol{\rho}} J = \frac{1}{2\sigma^2} \sum_{i=1}^{m} \sum_{j=1}^{n_i} L(y_j^i, \mathbf{w}_i^T \mathbf{x}_j^i + b_i) + \sum_{j=1}^{d} \left( \frac{\|\mathbf{w}^j\|_q^q}{\rho_j^q} + m \ln \rho_j \right),    (4)

where b = (b_1, ..., b_m)^T and \rho = (\rho_1, ..., \rho_d)^T.
We set the derivative of J with respect to \rho_j to zero and get

    \rho_j = \left( \frac{q}{m} \right)^{1/q} \|\mathbf{w}^j\|_q.
Plugging this into problem (4), the optimization problem can be reformulated as

    \min_{\mathbf{W}, \mathbf{b}} J = \frac{1}{2\sigma^2} \sum_{i=1}^{m} \sum_{j=1}^{n_i} L(y_j^i, \mathbf{w}_i^T \mathbf{x}_j^i + b_i) + m \sum_{j=1}^{d} \ln \|\mathbf{w}^j\|_q.    (5)
Note that problem (5) is non-convex since the second term is non-convex with respect to W. Because ln x <= x - 1 for any x > 0, problem (5) can be relaxed to problem (1) by setting \lambda = 2m\sigma^2. So the solutions of multi-task feature selection methods can be viewed as the solution of the relaxed optimization problem above. In many previous works such as [5, 27], ln(x) can be used as an approximation of I(x != 0), where I(.) is an indicator function. Using this view, we can regard the second term in problem (5) as an approximation of the number of rows with nonzero q-norms.
Note that we can directly solve problem (5) using a majorization-minimization (MM) algorithm [14]. For numerical stability, we can slightly modify the objective function in problem (5) by replacing the second term with m \sum_{j=1}^{d} \ln(\|\mathbf{w}^j\|_q + \epsilon), where \epsilon can be regarded as a regularization parameter. We denote the solution obtained in the tth iteration as w^j_{(t)}. In the (t + 1)th iteration, due to the concavity property of ln(.), we can bound the second term in problem (5) as follows:

    \sum_{j=1}^{d} \ln(\|\mathbf{w}^j\|_q + \epsilon) \le \sum_{j=1}^{d} \left[ \ln(\|\mathbf{w}^j_{(t)}\|_q + \epsilon) + \frac{\|\mathbf{w}^j\|_q - \|\mathbf{w}^j_{(t)}\|_q}{\|\mathbf{w}^j_{(t)}\|_q + \epsilon} \right].
Thus, in the (t + 1)th iteration, we need to solve a weighted version of problem (1):

    \min_{\mathbf{W}, \mathbf{b}} \frac{1}{2\sigma^2} \sum_{i=1}^{m} \sum_{j=1}^{n_i} L(y_j^i, \mathbf{w}_i^T \mathbf{x}_j^i + b_i) + m \sum_{j=1}^{d} \frac{\|\mathbf{w}^j\|_q}{\|\mathbf{w}^j_{(t)}\|_q + \epsilon}.
According to [14], the MM algorithm is guaranteed to converge to a local optimum.
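The resulting procedure is a simple reweighting loop; the sketch below assumes the squared loss, and solve_weighted is a hypothetical stand-in for any solver of the weighted problem above:

    import numpy as np

    def mm_multitask(Xs, ys, q, lam, eps=1e-4, n_iter=20):
        """Majorize-minimize loop for problem (5) with squared loss.

        Xs, ys : per-task design matrices (n_i x d) and target vectors.
        Returns W of shape (d, m) whose rows are jointly sparse across tasks.
        """
        d, m = Xs[0].shape[1], len(Xs)
        W = np.zeros((d, m))
        for _ in range(n_iter):
            row_norms = np.linalg.norm(W, ord=q, axis=1)   # ||w^j||_q per row
            penalties = lam / (row_norms + eps)            # weights of the majorizer
            W = solve_weighted(Xs, ys, q, penalties)       # hypothetical inner solver
        return W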
4 A Probabilistic Framework for Multi-Task Feature Selection
In the probabilistic interpretation above, we use a type II method to estimate {\rho_j} in the generalized normal prior, which can be viewed as a generalization of the ARD prior. In the ARD prior, according to [19], this approach is likely to lead to overfitting because the hyperparameters in the ARD prior are treated as points. Similar to the ARD prior, the model in the above section may overfit since {\rho_j} are estimated via point estimation. In the following, we will present our probabilistic framework for multi-task feature selection by imposing priors on the hyperparameters.
4.1 The Model
As in the above section, the likelihood for x_j^i is also defined based on the normal distribution:

    y_j^i \sim N(\mathbf{w}_i^T \mathbf{x}_j^i + b_i,\, \sigma_i^2).    (6)

Here we use different noise variances \sigma_i for different tasks to make our model more flexible. The prior on W is also defined similarly:

    W_{ji} \sim GN(0, \rho_j, q).    (7)
The main difference here is that we treat \rho_j as a random variable with the noninformative Jeffreys prior:

    p(\rho_j) \propto \sqrt{I(\rho_j)} = \sqrt{ \mathbb{E}_{\mathbf{w}^j} \left[ \left( \frac{\partial \ln p(\mathbf{w}^j \mid \rho_j)}{\partial \rho_j} \right)^2 \right] } \propto \frac{1}{\rho_j},    (8)

where I(\rho_j) denotes the Fisher information for \rho_j and E[.] denotes the expectation with respect to w^j. One advantage of using the Jeffreys prior is that the distribution has no hyperparameters.
4.2 Parameter Learning and Inference
Here we use the EM algorithm [8] to learn the model parameters. In our model, we denote \Theta = {W, b, {\sigma_i}, q} as the model parameters and \rho = (\rho_1, ..., \rho_d)^T as the hidden variables.
In the E-step, we construct the so-called Q-function as the surrogate for the log-likelihood:

    Q(\Theta \mid \Theta^{(t)}) = \int \ln p(\Theta \mid \mathbf{y}, \boldsymbol{\rho})\, p(\boldsymbol{\rho} \mid \mathbf{y}, \Theta^{(t)})\, d\boldsymbol{\rho},

where \Theta^{(t)} denotes the estimate of \Theta in the tth iteration and y = (y_1^1, ..., y_{n_m}^m)^T. It is easy to show that

    \ln p(\Theta \mid \mathbf{y}, \boldsymbol{\rho}) \propto \ln p(\mathbf{y} \mid \mathbf{W}, \{\sigma_i\}) + \ln p(\mathbf{W} \mid \boldsymbol{\rho}) \propto -\sum_{i=1}^{m}\left[ \frac{1}{2\sigma_i^2}\sum_{j=1}^{n_i}(y^i_j - \mathbf{w}_i^T\mathbf{x}^i_j - b_i)^2 + \frac{n_i \ln \sigma_i^2}{2} \right] - \sum_{j=1}^{d} \frac{\|\mathbf{w}^j\|_q^q}{\rho_j^q} - md \ln \Gamma\!\left(1 + \frac{1}{q}\right)

and p(\boldsymbol{\rho} \mid \mathbf{y}, \Theta^{(t)}) \propto \prod_{j=1}^{d} p(\rho_j)\, p(\mathbf{w}^j_{(t)} \mid \rho_j). We then compute E[\rho_j^{-q} \mid \mathbf{y}, \Theta^{(t)}] as

    \nu_j^{(t)} = \mathbb{E}\!\left[\frac{1}{\rho_j^q} \,\Big|\, \mathbf{y}, \Theta^{(t)}\right] = \frac{\int_0^\infty \rho_j^{-q}\, p(\rho_j)\, p(\mathbf{w}^j_{(t)} \mid \rho_j)\, d\rho_j}{\int_0^\infty p(\rho_j)\, p(\mathbf{w}^j_{(t)} \mid \rho_j)\, d\rho_j} = \frac{m}{q\, \|\mathbf{w}^j_{(t)}\|_q^q}.

So we can get

    Q(\Theta \mid \Theta^{(t)}) = -\sum_{i=1}^{m}\left[ \frac{1}{2\sigma_i^2}\sum_{j=1}^{n_i}(y^i_j - \mathbf{w}_i^T\mathbf{x}^i_j - b_i)^2 + \frac{n_i \ln \sigma_i^2}{2} \right] - \sum_{j=1}^{d} \nu_j^{(t)} \|\mathbf{w}^j\|_q^q - md \ln \Gamma\!\left(1 + \frac{1}{q}\right),

where \nu_j^{(t)} = \frac{m}{q\, \|\mathbf{w}^j_{(t)}\|_q^q}.
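The E-step thus reduces to one closed-form quantity per feature; in code (ours):

    import numpy as np

    def e_step_nu(W_t, q):
        """nu_j = E[rho_j^{-q} | y, Theta^{(t)}] = m / (q ||w^j_{(t)}||_q^q), per row j."""
        m = W_t.shape[1]
        row_qq = (np.abs(W_t) ** q).sum(axis=1)            # ||w^j||_q^q
        return m / (q * np.maximum(row_qq, 1e-12))         # guard all-zero rows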
In the M-step, we maximize Q(\Theta \mid \Theta^{(t)}) to update the estimates of W, b, {\sigma_i} and q.

For the estimation of W, we need to solve m convex optimization problems:

    \min_{\mathbf{w}_i} J_i = \lambda_0 \|\tilde{\mathbf{y}}_i - \mathbf{X}_i^T \mathbf{w}_i\|_2^2 + \sum_{j=1}^{d} \nu_j^{(t)} |w_{ji}|^q, \qquad i = 1, \ldots, m,    (9)

where \tilde{\mathbf{y}}_i = (y_1^i - b_i^{(t)}, \ldots, y_{n_i}^i - b_i^{(t)})^T, \mathbf{X}_i = (\mathbf{x}_1^i, \ldots, \mathbf{x}_{n_i}^i), and \lambda_0 = \frac{1}{2(\sigma_i^{(t)})^2}.
When q = 2, this becomes the conventional ridge regression problem. Here \nu_j is related to the sparsity of the jth row in W^{(t)}: the more sparse the jth row in W^{(t)}, the larger the \nu_j. When \nu_j is large, w_{ji} will be enforced to approach 0. We use a gradient method such as conjugate gradient to optimize problem (9). The subgradient with respect to w_i is

    \frac{\partial J_i}{\partial \mathbf{w}_i} = 2\lambda_0 \left( \mathbf{X}_i \mathbf{X}_i^T \mathbf{w}_i - \mathbf{X}_i \tilde{\mathbf{y}}_i \right) + \boldsymbol{\eta},

where \boldsymbol{\eta} = (q \nu_1 |w_{1i}|^{q-1} \operatorname{sign}(w_{1i}), \ldots, q \nu_d |w_{di}|^{q-1} \operatorname{sign}(w_{di}))^T and sign(.) denotes the sign function.
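For a single task, the subgradient can be assembled as follows (our sketch; nu is the vector of E-step quantities):

    import numpy as np

    def subgradient_w(w, X, y_tilde, nu, q, lam0):
        """Subgradient of problem (9) at w for one task.

        X : (d, n_i), columns are the inputs x_j; y_tilde : centered targets.
        """
        eta = q * nu * np.abs(w) ** (q - 1.0) * np.sign(w)
        return 2.0 * lam0 * (X @ (X.T @ w) - X @ y_tilde) + eta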
We set the derivatives of Q(\Theta \mid \Theta^{(t)}) with respect to b_i and \sigma_i to 0 and get

    b_i^{(t+1)} = \frac{1}{n_i} \sum_{j=1}^{n_i} \left[ y_j^i - (\mathbf{w}_i^{(t+1)})^T \mathbf{x}_j^i \right],

    \sigma_i^{(t+1)} = \sqrt{ \frac{1}{n_i} \sum_{j=1}^{n_i} \left[ y_j^i - (\mathbf{w}_i^{(t+1)})^T \mathbf{x}_j^i - b_i^{(t+1)} \right]^2 }.
For the estimation of q, we also use a gradient method. The gradient can be calculated as

    \frac{\partial Q}{\partial q} = -\sum_{j=1}^{d} \sum_{i=1}^{m} \nu_j^{(t)} |W_{ji}^{(t+1)}|^q \ln |W_{ji}^{(t+1)}| + \frac{md}{q^2}\, \psi\!\left(1 + \frac{1}{q}\right),

where \psi(x) \triangleq \frac{\partial \ln \Gamma(x)}{\partial x} is the digamma function.

4.3 Extension to Deal with Outlier Tasks and Tasks with Negative Correlation
An underlying assumption of multi-task feature selection using the l_{1,q} norm is that all tasks are similar to each other and they share the same features. This assumption may not be correct in practice because there may exist outlier tasks (i.e., tasks that are not related to all other tasks) or tasks with negative correlation (i.e., tasks that are negatively correlated with some other tasks). In this section, we will discuss how to extend our probabilistic model to deal with these tasks.
We first introduce the matrix variate generalized normal distribution [13] which is a generalization
of the generalized normal distribution to random matrices.
Definition 3. A matrix Z in R^{s x t} is a matrix variate generalized normal random variable iff its p.d.f. is given as follows:

    p(\mathbf{Z} \mid \mathbf{M}, \boldsymbol{\Sigma}, \boldsymbol{\Omega}, q) = \frac{1}{\left[ 2\Gamma(1 + \frac{1}{q}) \right]^{st} \det(\boldsymbol{\Sigma})^{t} \det(\boldsymbol{\Omega})^{s}} \exp\left( -\sum_{i=1}^{s} \sum_{j=1}^{t} \Big| \sum_{k=1}^{s} \sum_{l=1}^{t} (\boldsymbol{\Sigma}^{-1})_{ik} (Z_{kl} - M_{kl}) (\boldsymbol{\Omega}^{-1})_{lj} \Big|^q \right),

where \Sigma in R^{s x s} and \Omega in R^{t x t} are nonsingular, det(.) denotes the determinant of a square matrix, A_{kl} is the (k, l)th element of matrix A and (A^{-1})_{ik} is the (i, k)th element of the matrix inverse A^{-1}.
We write Z ~ MGN_{s,t}(M, \Sigma, \Omega, q) for a matrix variate generalized normal random variable Z. When q = 2, the matrix variate generalized normal distribution becomes the (ordinary) matrix variate normal distribution [12] with row covariance matrix \Sigma\Sigma^T and column covariance matrix \Omega^T\Omega, which has been used before in multi-task learning [4, 32, 31]. From this view, \Sigma is used to model the relationships between the rows of Z and \Omega is to model the relationships between the columns.
We note that the prior on W in Eq. (7) can be written as

    \mathbf{W} \sim MGN_{d,m}\big( \mathbf{0},\, \mathrm{diag}\big((\rho_1, \ldots, \rho_d)^T\big),\, \mathbf{I}_m,\, q \big),

where 0 denotes a zero vector or matrix of proper size, I_m denotes the m x m identity matrix and diag(.) converts a vector into a diagonal matrix. In this formulation, it can be seen that the columns of W (and hence the tasks) are independent of each other. However, the tasks are in general not independent. So we propose to use a new prior on W:

    \mathbf{W} \sim MGN_{d,m}\big( \mathbf{0},\, \mathrm{diag}\big((\rho_1, \ldots, \rho_d)^T\big),\, \boldsymbol{\Omega},\, q \big),    (10)

where \Omega models the pairwise relationships between tasks.
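Since the density argument is elementwise in \Sigma^{-1}(Z - M)\Omega^{-1}, a draw from this prior can be generated by transforming i.i.d. generalized normal variates; the sketch below is ours and treats \Omega as given:

    import numpy as np
    from scipy.stats import gennorm

    def sample_mgn_prior(rho, Omega, q, rng=None):
        """Draw W ~ MGN(0, diag(rho), Omega, q) as W = diag(rho) Z Omega,
        where Z has i.i.d. standardized generalized normal entries."""
        d, m = len(rho), Omega.shape[0]
        Z = gennorm.rvs(q, size=(d, m), random_state=rng)
        return (np.asarray(rho)[:, None] * Z) @ Omega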
The likelihood is still based on the normal distribution. Since in practice the relationships between tasks are not known in advance, we also need to estimate \Omega from data.

For parameter learning, we again use the EM algorithm to learn the model parameters. Here the model parameters are denoted as \Theta = {W, b, {\sigma_i}, q, \Omega}. It is easy to show that
    \ln p(\Theta \mid \mathbf{y}, \boldsymbol{\rho}) \propto -\sum_{i=1}^{m} \left[ \sum_{j=1}^{n_i} \frac{(y_j^i - \mathbf{w}_i^T \mathbf{x}_j^i - b_i)^2}{2\sigma_i^2} + \frac{n_i \ln \sigma_i^2}{2} \right] - \sum_{j=1}^{d} \frac{1}{\rho_j^q} \sum_{i=1}^{m} \Big| \sum_{k=1}^{m} W_{jk} (\boldsymbol{\Omega}^{-1})_{ki} \Big|^q - md \ln \Gamma\!\left(1 + \frac{1}{q}\right) - d \ln \det(\boldsymbol{\Omega}).
Then we compute E[\rho_j^{-q} \mid \mathbf{y}, \Theta^{(t)}] as

    \nu_j^{(t)} = \mathbb{E}\!\left[ \frac{1}{\rho_j^q} \,\Big|\, \mathbf{y}, \Theta^{(t)} \right] = \frac{\int_0^\infty \rho_j^{-q}\, p(\rho_j)\, p(\mathbf{w}^j_{(t)} \mid \rho_j)\, d\rho_j}{\int_0^\infty p(\rho_j)\, p(\mathbf{w}^j_{(t)} \mid \rho_j)\, d\rho_j} = \frac{m}{q \sum_{i=1}^{m} \big| [\mathbf{W}^{(t)} (\boldsymbol{\Omega}^{(t)})^{-1}]_{ji} \big|^q}.
In the E-step, the Q-function can be formulated as

    Q(\Theta \mid \Theta^{(t)}) = -\sum_{i=1}^{m} \left[ \sum_{j=1}^{n_i} \frac{(y_j^i - \mathbf{w}_i^T \mathbf{x}_j^i - b_i)^2}{2\sigma_i^2} + \frac{n_i \ln \sigma_i^2}{2} \right] - \sum_{j=1}^{d} \nu_j^{(t)} \sum_{i=1}^{m} \big| [\mathbf{W} \boldsymbol{\Omega}^{-1}]_{ji} \big|^q - md \ln \Gamma\!\left(1 + \frac{1}{q}\right) - d \ln \det(\boldsymbol{\Omega}).
In the M-step, for W and \Omega, the optimization problem becomes

    \min_{\mathbf{W}, \boldsymbol{\Omega}} J = \sum_{i=1}^{m} \lambda_i \sum_{j=1}^{n_i} (\tilde{y}_j^i - \mathbf{w}_i^T \mathbf{x}_j^i)^2 + \sum_{j=1}^{d} \nu_j^{(t)} \sum_{i=1}^{m} \big| [\mathbf{W} \boldsymbol{\Omega}^{-1}]_{ji} \big|^q + d \ln \det(\boldsymbol{\Omega}),

where \lambda_i = \frac{1}{2(\sigma_i^{(t)})^2} and \tilde{y}_j^i = y_j^i - b_i^{(t)}. We define a new variable \tilde{\mathbf{W}} = \mathbf{W} \boldsymbol{\Omega}^{-1} to rewrite the above problem as

    \min_{\tilde{\mathbf{W}}, \boldsymbol{\Omega}} J = \sum_{i=1}^{m} \lambda_i \sum_{j=1}^{n_i} \big( \tilde{y}_j^i - \mathbf{e}_i^T \boldsymbol{\Omega}^T \tilde{\mathbf{W}}^T \mathbf{x}_j^i \big)^2 + \sum_{j=1}^{d} \nu_j^{(t)} \sum_{i=1}^{m} |\tilde{W}_{ji}|^q + d \ln \det(\boldsymbol{\Omega}),

where e_i denotes the ith column of the m x m identity matrix. We use an alternating method to solve this problem. For a fixed \Omega, the problem with respect to \tilde{W} is a convex problem and we use conjugate gradient to solve it with the following subgradient

    \frac{\partial J}{\partial \tilde{\mathbf{W}}} = 2 \sum_{i=1}^{m} \sum_{j=1}^{n_i} \lambda_i \left[ \mathbf{x}_j^i (\mathbf{x}_j^i)^T \tilde{\mathbf{W}} \boldsymbol{\Omega} \mathbf{e}_i \mathbf{e}_i^T \boldsymbol{\Omega}^T - \tilde{y}_j^i \mathbf{x}_j^i \mathbf{e}_i^T \boldsymbol{\Omega}^T \right] + q \mathbf{M},

where M is a d x m matrix with the (j, i)th element \nu_j^{(t)} |\tilde{W}_{ji}|^{q-1} \operatorname{sign}(\tilde{W}_{ji}). For a fixed \tilde{W}, we also use conjugate gradient with the following gradient

    \frac{\partial J}{\partial \boldsymbol{\Omega}} = 2 \sum_{i=1}^{m} \sum_{j=1}^{n_i} \lambda_i \left[ \tilde{\mathbf{W}}^T \mathbf{x}_j^i (\mathbf{x}_j^i)^T \tilde{\mathbf{W}} \boldsymbol{\Omega} \mathbf{e}_i \mathbf{e}_i^T - \tilde{y}_j^i \tilde{\mathbf{W}}^T \mathbf{x}_j^i \mathbf{e}_i^T \right] + d\, (\boldsymbol{\Omega}^T)^{-1}.

After obtaining the optimal \tilde{W}^* and \Omega^*, we can compute the optimal W^* as W^* = \tilde{W}^* \Omega^*. The update rules for {b_i}, {\sigma_i} and q are similar to those in the above section.
5 Related Work
Some probabilistic multi-task feature selection methods have been proposed before [28, 2]. However, they only focus on the l_{1,2} norm. Moreover, they use point estimation in the ARD prior and hence, as discussed in Section 3, are susceptible to overfitting [19].

Zhang et al. [30] proposed a latent variable model for multi-task learning by using the Laplace prior to enforce sparsity. This is equivalent to using the l_{1,1} norm in our framework which, as discussed above, cannot enforce group sparsity among different features over all tasks.
6 Experiments
In this section, we study our methods empirically on two cancer classification applications using microarray gene expression data. We compare our methods with three related methods: multi-task feature learning (MTFL) [1]^1, multi-task feature selection using l_{1,2} regularization [16]^2, and multi-task feature selection using l_{1,inf} regularization [20]^3.
6.1 Breast Cancer Classification
We first conduct empirical study on a breast cancer classification application. This application consists of three learning tasks with data collected under different platforms [21]. The dataset for the
first task, collected at the Koo Foundation Sun Yat-Sen Cancer Centre in Taipei, contains 89 samples with 8948 genes per sample. The dataset for the second task, obtained from the Netherlands
Cancer Institute, contains 97 samples with 16360 genes per sample. Most of the patients in this
dataset had stage I or II breast cancer. The dataset for the third task, obtained using 22K Agilent
oligonucleotide arrays, contains 114 samples with 12065 genes per sample. Even though these three
datasets were collected under different platforms, they share 6092 common genes which are used in
our experiments.
Here we abbreviate the method in Section 4.2 as PMTFS1 and that in Section 4.3 as PMTFS2. For
each task, we choose 70% of the data for training and the rest for testing. We perform 10 random splits of the data and report the mean and standard deviation of the classification error over the 10 trials. The results are summarized in Table 1. It is clear that PMTFS1 outperforms the three previous methods, showing the effectiveness of our more general formulation with q determined automatically. Moreover, we also note that PMTFS2 is better than PMTFS1. This verifies the usefulness of exploiting the relationships between tasks in multi-task feature selection. Since our methods can estimate q automatically, we compute the mean of the estimated q values over 10 trials. The means for PMTFS1 and PMTFS2 are 2.5003 and 2.6718, respectively, which seem to imply that smaller values of q are preferred for this application. This probably explains why the performance of MTFS_{1,inf} is not good when compared with other methods.
Table 1: Comparison of different methods on the breast cancer classification application in terms of classification error rate (in mean±std-dev). Each column in the table represents one task.

    Method          1st Task          2nd Task          3rd Task
    MTFL            0.3478±0.1108     0.0364±0.0345     0.3091±0.0498
    MTFS_{1,2}      0.3370±0.0228     0.0343±0.0134     0.2855±0.0337
    MTFS_{1,inf}    0.3896±0.0583     0.1136±0.0579     0.2909±0.0761
    PMTFS1          0.3072±0.0234     0.0298±0.0121     0.1786±0.0245
    PMTFS2          0.2870±0.0228     0.0273±0.0102     0.1455±0.0263

6.2 Prostate Cancer Classification
We next study a prostate cancer classification application consisting of two tasks. The Singh
dataset [22] for the first task is made up of laser intensity images from each microarray. The RMA
preprocessing method was used to produce gene expression values from these images. On the other
^1 http://ttic.uchicago.edu/~argyriou/code/index.html
^2 http://www.public.asu.edu/~jye02/Software/SLEP/index.htm
^3 http://www.lsi.upc.edu/~aquattoni/
hand, the Welsh dataset [25] for the second task is already in the form of gene expression values.
Even though the collection techniques for the two datasets are different, they have 12600 genes in
common and are used in our experiments.
The experimental setup for this application is similar to that in the previous subsection, that is, 70%
of the data of each task are used for training and the rest for testing, and 10 random splits of the data
are performed. We report the mean and standard deviation of the classification error over the 10 trials in Table 2. As in the first set of experiments, PMTFS1 and PMTFS2 are better than the other three methods compared and PMTFS2 slightly outperforms PMTFS1. The means of the estimated q values for PMTFS1 and PMTFS2 are 2.5865 and 2.6319, respectively. So it seems that smaller values are also preferred for this application.
Table 2: Comparison of different methods on the prostate cancer classification application in terms of classification error rate (in mean±std-dev). Each column in the table represents one task.

    Method          1st Task          2nd Task
    MTFL            0.1226±0.0620     0.3500±0.0085
    MTFS_{1,2}      0.1232±0.0270     0.3420±0.0067
    MTFS_{1,inf}    0.2216±0.1667     0.4200±0.1304
    PMTFS1          0.1123±0.0170     0.3214±0.0053
    PMTFS2          0.1032±0.0136     0.3000±0.0059

7 Concluding Remarks
In this paper, we have proposed a probabilistic framework for general multi-task feature selection using the l_{1,q} norm (1 < q <= inf). Our model allows the optimal value of q to be determined from data automatically. Besides considering the case in which all tasks are similar, we have also considered the more general and challenging case in which there also exist outlier tasks or tasks with negative correlation.
Compressed sensing aims at recovering the sparse signal w from a measurement vector b = Aw for a given matrix A. Compressed sensing can be extended to the multiple measurement vector (MMV) model in which the signals are represented as a set of jointly sparse vectors sharing a common set of nonzero elements [7, 6, 23]. Specifically, joint compressed sensing considers the reconstruction of the signal represented by a matrix W, which is given by a dictionary (or measurement matrix) A and multiple measurement vectors B such that B = AW. Similar to multi-task feature selection, we can use \|W\|_{1,q} to enforce the joint sparsity in W. Since there usually exists noise in the data, the optimization problem of MMV can be formulated as: \min_{\mathbf{W}} \lambda \|\mathbf{W}\|_{1,q} + \|\mathbf{A}\mathbf{W} - \mathbf{B}\|_2^2. This problem is almost identical to problem (1) except that the loss defines the reconstruction error rather than the prediction error. So we can use the probabilistic model presented in Section 4 to develop a probabilistic model for joint compressed sensing. Besides, we are also interested in developing a full Bayesian version of our model to further exploit the advantages of Bayesian modeling.
Acknowledgment
This research has been supported by General Research Fund 622209 from the Research Grants
Council of Hong Kong.
References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.
[2] J. Bi, T. Xiong, S. Yu, M. Dundar, and R. B. Rao. An improved multi-task learning approach with applications in medical diagnosis. In ECMLPKDD, 2008.
[3] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[4] E. Bonilla, K. M. A. Chai, and C. Williams. Multi-task Gaussian process prediction. In NIPS 20, 2008.
[5] E. J. Candès, M. B. Wakin, and S. P. Boyd. Enhancing sparsity by reweighted l1 minimization. Journal of Fourier Analysis and Applications, 14(5):877-905, 2008.
[6] J. Chen and X. Huo. Theoretical results on sparse representations of multiple-measurement vectors. IEEE Transactions on Signal Processing, 54(12):4634-4643, 2006.
[7] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Transactions on Signal Processing, 53(7):2477-2488, 2005.
[8] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B, 39(1):1-38, 1977.
[9] M. A. T. Figueiredo. Adaptive sparseness for supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1150-1159, 2003.
[10] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall, 2nd edition, 2003.
[11] I. R. Goodman and S. Kotz. Multivariate θ-generalized normal distributions. Journal of Multivariate Analysis, 3(2):204-219, 1973.
[12] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall, 2000.
[13] A. K. Gupta and T. Varga. Matrix variate θ-generalized normal distribution. Transactions of the American Mathematical Society, 347(4):1429-1437, 1995.
[14] K. Lange, D. R. Hunter, and I. Yang. Optimization transfer using surrogate objective functions. Journal of Computational and Graphical Statistics, 9(1):1-59, 2000.
[15] H. Liu, M. Palatucci, and J. Zhang. Blockwise coordinate descent procedures for the multi-task lasso, with applications to neural semantic basis discovery. In ICML, 2009.
[16] J. Liu, S. Ji, and J. Ye. Multi-task feature learning via efficient l2,1-norm minimization. In UAI, 2009.
[17] G. Obozinski, B. Taskar, and M. Jordan. Multi-task feature selection. Technical report, Department of Statistics, University of California, Berkeley, June 2006.
[18] G. Obozinski, B. Taskar, and M. I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2):231-252, 2010.
[19] Y. Qi, T. P. Minka, R. W. Picard, and Z. Ghahramani. Predictive automatic relevance determination by expectation propagation. In ICML, 2004.
[20] A. Quattoni, X. Carreras, M. Collins, and T. Darrell. An efficient projection for l1,inf regularization. In ICML, 2009.
[21] A. A. Shabalin, H. Tjelmeland, C. Fan, C. M. Perou, and A. B. Nobel. Merging two gene-expression studies via cross-platform normalization. Bioinformatics, 24(9):1154-1160, 2008.
[22] D. Singh, P. G. Febbo, K. Ross, D. G. Jackson, J. Manola, C. Ladd, P. Tamayo, A. A. Renshaw, A. V. D'Amico, J. P. Richie, E. S. Lander, M. Loda, P. W. Kantoff, T. R. Golub, and W. R. Sellers. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell, 1(2):203-209, 2002.
[23] L. Sun, J. Liu, J. Chen, and J. Ye. Efficient recovery of jointly sparse vectors. In NIPS 22, 2009.
[24] B. A. Turlach, W. N. Venables, and S. J. Wright. Simultaneous variable selection. Technometrics, 47(3):349-363, 2005.
[25] J. B. Welsh, L. M. Sapinoso, A. I. Su, S. G. Kern, J. Wang-Rodriguez, C. A. Moskaluk, F. H. Frierson, Jr., and G. M. Hampton. Analysis of gene expression identifies candidate markers and pharmacological targets in prostate cancer. Cancer Research, 61(16):5974-5978, 2001.
[26] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. In NIPS 20, 2007.
[27] D. P. Wipf and S. Nagarajan. Iterative reweighted l1 and l2 methods for finding sparse solutions. Journal of Selected Topics in Signal Processing, 2010.
[28] T. Xiong, J. Bi, B. Rao, and V. Cherkassky. Probabilistic joint feature selection for multi-task learning. In SDM, 2007.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 2006.
[30] J. Zhang, Z. Ghahramani, and Y. Yang. Flexible latent variable models for multi-task learning. Machine Learning, 73(3):221-242, 2008.
[31] Y. Zhang and D.-Y. Yeung. A convex formulation for learning task relationships in multi-task learning. In UAI, 2010.
[32] Y. Zhang and D.-Y. Yeung. Multi-task learning using generalized t process. In AISTATS, 2010.
Construction of Dependent Dirichlet Processes
based on Poisson Processes
Dahua Lin
CSAIL, MIT
[email protected]
Eric Grimson
CSAIL, MIT
[email protected]
John Fisher
CSAIL, MIT
[email protected]
Abstract
We present a novel method for constructing dependent Dirichlet processes. The
approach exploits the intrinsic relationship between Dirichlet and Poisson processes in order to create a Markov chain of Dirichlet processes suitable for use
as a prior over evolving mixture models. The method allows for the creation, removal, and location variation of component models over time while maintaining
the property that the random measures are marginally DP distributed. Additionally, we derive a Gibbs sampling algorithm for model inference and test it on both
synthetic and real data. Empirical results demonstrate that the approach is effective in estimating dynamically varying mixture models.
1 Introduction
As the cornerstone of Bayesian nonparametric modeling, Dirichlet processes (DP) [22] have been
applied to a wide variety of inference and estimation problems [3, 10, 20] with Dirichlet process
mixtures (DPMs) [15, 17] being one of the most successful. DPMs are a generalization of finite
mixture models that allow an indefinite number of mixture components. The traditional DPM model
assumes that each sample is generated independently from the same DP. This assumption is limiting
in cases when samples come from many, yet dependent, DPs. HDPs [23] partially address this
modeling aspect by providing a way to construct multiple DPs implicitly depending on each other
via a common parent. However, their hierarchical structure may not be appropriate in some problems
(e.g. temporally varying DPs).
Consider a document model where each document is generated under a particular topic and each
topic is characterized by a distribution over words. Over time, topics change: some old topics fade
while new ones emerge. For each particular topic, the word distribution may evolve as well. A
natural approach to model such topics is to use a Markov chain of DPs as a prior, such that the DP
at each time is generated by varying the previous one in three possible ways: creating a new topic,
removing an existing topic, and changing the word distribution of a topic.
Since MacEachern introduced the notion of dependent Dirichlet processes (DDP) [12], a variety of DDP constructions have been developed, which are based on either weighted mixtures of
DPs [6, 14, 18], generalized Chinese restaurant processes [4, 21, 24], or the stick breaking construction [5, 7]. Here, we propose a fundamentally different approach, taking advantage of the intrinsic
relationship between Dirichlet processes and Poisson processes: a Dirichlet process is a normalized Gamma process, while a Gamma process is essentially a compound Poisson process. The key
idea is motivated by the following: observations that preserve complete randomness when applied
to Poisson processes result in a new process that remains Poisson. Consequently, one can obtain
a Dirichlet process which is dependent on other DPs by applying such operations to their underlying compound Poisson processes. In particular, we discuss three specific operations: superposition,
subsampling, and point transition. We develop a Markov chain of DPs by combining these operations, leading to a framework that allows creation, removal, and location variation of particles. This
1
construction inherently comes with an elegant property that the random measure at each time is
marginally DP distributed. Our approach relates to previous efforts in constructing dependent DPs
while overcoming inherent limitations. A detailed comparison is given in section 4.
2 Poisson, Gamma, and Dirichlet Processes
Our construction of dependent Dirichlet processes rests upon the connection between Poisson,
Gamma, and Dirichlet processes, as well as the concept of complete randomness. We briefly review these concepts; Kingman [9] provides a detailed exposition of the relevant theory.
Let (\Omega, \mathcal{F}_\Omega) be a measurable space, and \Pi be a random point process on \Omega. Each realization of \Pi uniquely corresponds to a counting measure N_\Pi defined by N_\Pi(A) \triangleq \#(\Pi \cap A) for each A \in \mathcal{F}_\Omega. Hence, N_\Pi is a measure-valued random variable or simply a random measure. A Poisson process \Pi on \Omega with mean measure \mu, denoted \Pi \sim \mathrm{PoissonP}(\mu), is defined to be a point process such that N_\Pi(A) has a Poisson distribution with mean \mu(A) and that for any disjoint measurable sets A_1, \ldots, A_n, the counts N_\Pi(A_1), \ldots, N_\Pi(A_n) are independent. The latter property is referred to as complete randomness. Poisson processes are the only point process that satisfies this property [9]:

Theorem 1. A random point process \Pi on a regular measure space is a Poisson process if and only if N_\Pi is completely random. If this is true, the mean measure is given by \mu(A) = E(N_\Pi(A)).
Consider \Pi^* \sim \mathrm{PoissonP}(\mu^*) on a product space \Omega \times \mathbb{R}_+. For each realization of \Pi^*, we define \Sigma^*: \mathcal{F}_\Omega \to [0, +\infty] as

    \Sigma^* \triangleq \sum_{(\theta, w_\theta) \in \Pi^*} w_\theta\, \delta_\theta.    (1)

Intuitively, \Sigma^*(A) sums up the values of w_\theta with \theta \in A. Note that \Sigma^* is also a completely random measure (but not a point process in general), and is essentially a generalization of the compound Poisson process. As a special case, if we choose \mu^* to be

    \mu^* = \mu \times \gamma \quad \text{with} \quad \gamma(dw) = w^{-1} e^{-w}\, dw,    (2)

then the random measure as defined in Eq. (1) is called a Gamma process with base measure \mu, denoted by G \sim \Gamma\mathrm{P}(\mu). Normalizing any realization of G \sim \Gamma\mathrm{P}(\mu) yields a sample of a Dirichlet process, as

    D \triangleq G / G(\Omega) \sim \mathrm{DP}(\mu).    (3)

In conventional parameterization, \mu is often decomposed into two parts: a base distribution p_\mu \triangleq \mu / \mu(\Omega), and a concentration parameter \alpha_\mu \triangleq \mu(\Omega).
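A finite truncation makes this concrete; the sketch below (ours) draws an approximate DP sample using stick-breaking weights, which are marginally equivalent to normalized Gamma process weights:

    import numpy as np

    def sample_dp(alpha, base_sampler, K=500, rng=None):
        """Approximate draw D ~ DP(alpha * p) with a K-atom truncation.

        base_sampler(K, rng) returns K i.i.d. atoms from the base distribution p.
        """
        rng = rng if rng is not None else np.random.default_rng()
        b = rng.beta(1.0, alpha, size=K)                       # stick-breaking ratios
        w = b * np.cumprod(np.concatenate(([1.0], 1.0 - b[:-1])))
        return w / w.sum(), base_sampler(K, rng)               # renormalize truncation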
3 Construction of Dependent Dirichlet Processes
Motivated by the relationship between Poisson and Dirichlet processes, we develop a new approach
for constructing dependent Dirichlet processes (DDPs). Our approach can be described as follows:
given a collection of Dirichlet processes, one can apply operations that preserve the complete randomness of their underlying Poisson processes. This yields a new Poisson process (due to theorem 1)
and a related DP which depends on the source. In particular, we consider three such operations: superposition, subsampling, and point transition.
Superposition of Poisson processes: Combining a set of independent Poisson processes yields a
Poisson process whose mean measure is the sum of mean measures of the individual ones.
Theorem 2 (Superposition Theorem [9]). Let \Pi_1, \ldots, \Pi_m be independent Poisson processes on \Omega with \Pi_k \sim \mathrm{PoissonP}(\mu_k); then their union has

    \Pi_1 \cup \cdots \cup \Pi_m \sim \mathrm{PoissonP}(\mu_1 + \cdots + \mu_m).    (4)
Given a collection of independent Gamma processes G_1, \ldots, G_m, where for each k = 1, \ldots, m, G_k \sim \Gamma\mathrm{P}(\mu_k) with underlying Poisson process \Pi_k^* \sim \mathrm{PoissonP}(\mu_k \times \gamma), by Theorem 2 we have

    \bigcup_{k=1}^{m} \Pi_k^* \sim \mathrm{PoissonP}\left( \sum_{k=1}^{m} (\mu_k \times \gamma) \right) = \mathrm{PoissonP}\left( \Big( \sum_{k=1}^{m} \mu_k \Big) \times \gamma \right).    (5)
2
Due to the relationship between Gamma processes and their underlying Poisson processes, such a
combination is equivalent to the direct superposition of the Gamma processes themselves, as
G0 := G1 + ? ? ? + Gm ? ?P(?1 + ? ? ? + ?m ).
(6)
Let Dk = Gk /Gk (?), and gk = Gk (?), then Dk is independent of gk , and thus
D0 := G0 /G0 (?) = (g1 D1 + ? ? ? + gm Dm )/(g1 + ? ? ? + gm ) = c1 D1 + ? ? ? + cm Dm . (7)
Pm
Here, ck = gk / l=1 gl , which has (c1 , . . . , cm ) ? Dir(?1 (?), . . . , ?m (?)). Consequently, one
can construct a Dirichlet process through a random convex combination of independent Dirichlet
processes. This result is summarized by the following theorem:
Theorem 3. Let D1 , . . . , Dm be independent Dirichlet processes on ? with Dk ? DP(?k ), and
(c1 , . . . , cm ) ? Dir(?1 (?), . . . , ?m (?)) be independent of D1 , . . . , Dm , then
D1 ? ? ? ? ? Dm := c1 D1 + ? ? ? cm Dm ? DP(?1 + ? ? ? + ?m ).
(8)
Here, we use the symbol \oplus to indicate superposition via a random convex combination. Let \alpha_k = \mu_k(\Omega) and \alpha' = \sum_{k=1}^{m} \alpha_k; then for each measurable subset A,

    \mathbb{E}(D'(A)) = \sum_{k=1}^{m} \frac{\alpha_k}{\alpha'}\, \mathbb{E}(D_k(A)), \quad \text{and} \quad \mathrm{Cov}(D'(A), D_k(A)) = \frac{\alpha_k}{\alpha'}\, \mathrm{Var}(D_k(A)).    (9)
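Operationally, Theorem 3 combines weighted atom lists; a sketch in the same style as above (a DP sample is represented by arrays of weights and atoms):

    import numpy as np

    def superpose(dps, alphas, rng):
        """D' = c_1 D_1 + ... + c_m D_m with (c_1, ..., c_m) ~ Dir(alpha_1, ..., alpha_m).

        dps    : list of (weights, atoms) pairs, one per D_k.
        alphas : list of concentration parameters alpha_k = mu_k(Omega).
        """
        c = rng.dirichlet(alphas)
        weights = np.concatenate([ck * w for ck, (w, _) in zip(c, dps)])
        atoms = np.concatenate([a for (_, a) in dps])
        return weights, atoms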
Subsampling Poisson processes: Random subsampling of a Poisson process via independent
Bernoulli trials yields a new Poisson process.
Theorem 4 (Subsampling Theorem). Let \Pi \sim \mathrm{PoissonP}(\mu) be a Poisson process on the space \Omega, and q: \Omega \to [0, 1] be a measurable function. If we independently draw z_\theta \in \{0, 1\} for each \theta \in \Pi with P(z_\theta = 1) = q(\theta), and let \Pi_k = \{\theta \in \Pi : z_\theta = k\} for k = 0, 1, then \Pi_0 and \Pi_1 are independent Poisson processes on \Omega, with \Pi_0 \sim \mathrm{PoissonP}((1 - q)\mu) and \Pi_1 \sim \mathrm{PoissonP}(q\mu)^1.

We emphasize that subsampling is via independent Bernoulli trials rather than choosing a fixed number of particles. We use S_q(\Pi) := \Pi_1 to denote the result of subsampling, where q is referred to as the acceptance function. Note that subsampling the underlying Poisson process of a Gamma process G is equivalent to subsampling the terms of G. Let G = \sum_{i=1}^{\infty} w_i \delta_{\theta_i}, and for each i, we draw z_i with P(z_i = 1) = q(\theta_i). Then, we have

    G' = S_q(G) := \sum_{i: z_i = 1} w_i \delta_{\theta_i} \sim \Gamma\mathrm{P}(q\mu).    (10)
Let D be a Dirichlet process given by D = G/G(Ω); then we can construct a new Dirichlet process D' = G'/G'(Ω) by subsampling the terms of D and renormalizing their coefficients. This is summarized by the following theorem.
Theorem 5. Let D ~ DP(μ) be represented by D = \sum_{i=1}^{\infty} r_i δ_{θ_i}, and let q : Ω → [0, 1] be a measurable function. For each i we independently draw z_i with P(z_i = 1) = q(θ_i); then
$$D' = S_q(D) := \sum_{i : z_i = 1} r_i' \delta_{\theta_i} \sim \mathrm{DP}(q\mu), \tag{11}$$
where r_i' := r_i / \sum_{j : z_j = 1} r_j are the re-normalized coefficients for those i with z_i = 1.
Let α = μ(Ω) and α' = (qμ)(Ω); then for each measurable subset A,
$$\mathrm{E}(D'(A)) = \frac{(q\mu)(A)}{(q\mu)(\Omega)} = \frac{\int_A q\, d\mu}{\int_\Omega q\, d\mu}, \quad \text{and} \quad \mathrm{Cov}(D'(A), D(A)) = \frac{\alpha'}{\alpha}\, \mathrm{Var}(D'(A)). \tag{12}$$
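Theorem 5 likewise admits a direct sketch (ours; q may be any vectorized map from atoms to [0, 1], and the empty-survivor case is a truncation artifact handled defensively):

import numpy as np

def subsample(weights, atoms, q, rng):
    """Keep atom i with probability q(theta_i) via independent Bernoulli trials,
    then renormalize the survivors: a draw from DP(q * mu)."""
    keep = rng.random(len(weights)) < q(atoms)   # z_i ~ Bernoulli(q(theta_i))
    if not keep.any():                           # nothing survived the trials
        return np.empty(0), atoms[:0]
    w = weights[keep]
    return w / w.sum(), atoms[keep]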
Point transition of Poisson processes: The third operation moves each point independently following a probabilistic transition. Formally, a probabilistic transition is defined to be a function T : Ω × F_Ω → [0, 1] such that for each θ ∈ Ω, T(θ, ·) is a probability measure on Ω that describes the distribution of where θ moves, and for each A ∈ F_Ω, T(·, A) is integrable. T can be considered as a transformation of measures over Ω, as
$$(T\mu)(A) := \int_\Omega T(\theta, A)\, \mu(d\theta). \tag{13}$$
¹Here qμ is a measure on Ω given by (qμ)(A) = ∫_A q dμ, or equivalently (qμ)(dθ) = q(θ)μ(dθ).
Theorem 6 (Transition Theorem). Let Π ~ PoissonP(μ) and T be a probabilistic transition; then
$$T(\Pi) := \{T(\theta) : \theta \in \Pi\} \sim \mathrm{PoissonP}(T\mu). \tag{14}$$
With a slight abuse of notation, we use T(θ) to denote an independent sample from T(θ, ·).
As a consequence, we can derive a Gamma process and thus a Dirichlet process by applying the probabilistic transition to the location of each term, leading to the following:
Theorem 7. Let D = \sum_{i=1}^{\infty} r_i δ_{θ_i} ~ DP(μ) be a Dirichlet process on Ω; then
$$T(D) := \sum_{i=1}^{\infty} r_i \delta_{T(\theta_i)} \sim \mathrm{DP}(T\mu). \tag{15}$$
Theorems 1 and 2 are immediate consequences of the results in [9]. We derive Theorems 3 to 7 independently as part of the proposed approach. Detailed explanations of relevant concepts and the proofs of Theorems 2 to 7 are provided in the supplement.
3.1 A Markov Chain of Dirichlet Processes
Integrating these three operations, we construct a Markov chain of DPs formulated as
$$D_t = T(S_q(D_{t-1})) \oplus H_t, \quad \text{with } H_t \sim \mathrm{DP}(\nu). \tag{16}$$
The model can be explained as follows: given D_{t-1}, we choose a subset of terms by subsampling, then move their locations via a probabilistic transition T, and finally superimpose a new DP H_t on the resultant process to form D_t. Hence, creating new particles, removing existing particles, and varying particle locations are all allowed, respectively, via superposition, subsampling, and point transition. Note that while these are based on operations of the underlying Poisson processes, due to theorems 3, 5, and 7, we operate directly on the DPs, without the need to explicitly instantiate the associated Poisson processes or Gamma processes. Let μ_t be the base measure of D_t; then
$$\mu_t = T(q\mu_{t-1}) + \nu. \tag{17}$$
Particularly, if the acceptance probability q is a constant, then α_t = qα_{t-1} + α_ν. Here, α_t = μ_t(Ω) and α_ν = ν(Ω) are the concentration parameters. One may hold α_t fixed over time by choosing appropriate values for q and α_ν. Furthermore, it can be shown that
$$\mathrm{Cov}(D_{t+n}(A), D_t(A)) \le q^n\, \mathrm{Var}(D_t(A)). \tag{18}$$
The covariance with previous DPs decays exponentially when q < 1. This is often a desirable property in practice. Moreover, we note that ν and q play different roles in controlling the process. Generally, ν determines how frequently new terms appear, while q governs the life span of a term, which has a geometric distribution with mean (1 - q)^{-1}.
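Composing the three sketches above yields one step of Eq.(16). The following minimal illustration (ours) assumes a constant acceptance probability q, a standard-normal base for the innovation measure ν with mass alpha_nu, and alpha_prev = μ_{t-1}(Ω), so that the convex combination uses the masses q·alpha_prev and alpha_nu, matching α_t = qα_{t-1} + α_ν:

import numpy as np

def chain_step(weights, atoms, q, sigma, alpha_nu, alpha_prev, rng):
    """One step D_t = T(S_q(D_{t-1})) superposed with H_t ~ DP(nu)."""
    w, a = subsample(weights, atoms, lambda th: np.full(len(th), q), rng)
    w, a = point_transition(w, a, sigma, rng)
    h_w, h_a = sample_dp(alpha_nu, lambda n, r: r.normal(size=n), rng=rng)
    c = rng.dirichlet([q * alpha_prev, alpha_nu])   # survivors vs. innovation
    return np.concatenate([c[0] * w, c[1] * h_w]), np.concatenate([a, h_a])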
We aim to use the Markov chain of DPs as a prior for evolving mixture models. This provides
a mechanism with which new component models can be brought in, existing components can be
removed, and the model parameters can vary smoothly over time.
4 Comparison with Related Work
In his pioneering work [12], MacEachern proposed the "single-p DDP model". It considers the DDP as a collection of stochastic processes, but does not provide a natural mechanism to change the collection size over time. Müller et al. [14] formulated each DP as a weighted mixture of a common DP and an independent DP. This formulation was extended by Dunson [6] in modeling latent trait distributions. Zhu et al. [24] presented the time-sensitive DP, in which the contribution of each DP decays exponentially. Teh et al. [23] proposed the HDP, where each child DP takes its parent DP as the base measure. Ren [18] combines the weighted mixture formulation with HDP to construct the
dynamic HDP. In contrast to the model proposed here, a fundamental difference of these models is
that the marginal distribution at each node is generally not a DP.
Caron et al. [4] developed a generalized Pólya urn scheme, while Ahmed and Xing [1] developed the recurrent Chinese restaurant process (CRP). Both generalize the CRP to allow time-variation, while retaining the property of being marginally DP. The motivation underlying these methods fundamentally differs from ours, leading to distinct differences in the sampling algorithm. In particular, [4]
supports innovation and deletion of particles, but does not support variation of locations. Moreover,
its deletion scheme is based on the distribution in history, but not on whether a component model
fits the new observation. While [1] does support innovation and point transition, there is no explicit
way to delete old particles. It can be considered a special case of the proposed framework in which
subsampling operation is not incorporated. We note that [1] is motivated from an algorithmic rather
than theoretical perspective.
Griffin and Steel [7] present the πDDP based on the stick-breaking construction [19], reordering the stick-breaking ratios for each time so as to obtain different distributions over the particles. This work is further extended [8] to generic stick-breaking processes. Chung et al. [5] propose a local DP that generalizes the πDDP. Rather than reordering the stick-breaking ratios, they regroup them locally such that dependent DPs can be constructed over a general covariate space. Inference in these models requires sampling a series of auxiliary variables, considerably increasing computational costs. Moreover, the local DP relies on a truncated approximation to devise the sampling scheme.
Recently, Rao and Teh [16] proposed the spatially normalized Gamma process. They construct a
universal Gamma process in an auxiliary space and obtain dependent DPs by normalizing it within
overlapped local regions. The theoretical foundation differs in that it does not exploit the relationship
between the Gamma and Poisson process which is at the heart of the proposed model. In [16], the
dependency is established through region overlapping; while in our work, this is accomplished by
explicitly transferring particles from one DP to another. In addition, this work does not support
location variation, as it relies on a universal particle pool that is fixed over time.
5 The Sampling Algorithm
We develop a Gibbs sampling procedure based on the construction of DDPs introduced above. The key idea is to derive sampling steps by exploiting the fact that our construction maintains the property of being marginally DP via connections to the underlying Poisson processes. Furthermore, the derived procedure unifies distinct aspects (innovation, removal, and transition) of our model. Let D ~ DP(μ) be a Dirichlet process on Ω. Then given a set of samples Φ ~ D, in which φ_i appears c_i times, we have D|Φ ~ DP(μ + c_1 δ_{φ_1} + ⋯ + c_m δ_{φ_m}). Let D' be a Dirichlet process depending on D as in Eq.(16), α' = (qμ)(Ω), and q_i = q(φ_i). Given Φ ~ D, we have
$$D' \,|\, \Phi \sim \mathrm{DP}\Bigl(\alpha_\nu p_\nu + \alpha' p_{q\mu} + \sum_{k=1}^{m} q_k c_k T(\phi_k, \cdot)\Bigr). \tag{19}$$
Sampling from D'. Let θ_1 ~ D'. Marginalizing over D', we get
$$\theta_1 \,|\, \Phi \sim \frac{\alpha_\nu}{\alpha_1'}\, p_\nu + \frac{\alpha'}{\alpha_1'}\, p_{q\mu} + \sum_{k=1}^{m} \frac{q_k c_k}{\alpha_1'}\, T(\phi_k, \cdot), \quad \text{with } \alpha_1' = \alpha_\nu + \alpha' + \sum_{k=1}^{m} q_k c_k. \tag{20}$$
Thus we sample θ_1 from three types of sources: the innovation distribution p_ν, the q-subsampled base distribution p_{qμ}, and the transition distribution T(φ_k, ·). In doing so, we first sample a variable u_1 that indicates which source to sample from. Specifically, when u_1 = -1, u_1 = 0, or u_1 = l > 0, we respectively sample θ_1 from p_ν, p_{qμ}, or T(φ_l, ·). The probabilities of these cases are α_ν/α_1', α'/α_1', and q_l c_l/α_1', respectively. After u_1 is obtained, we then draw θ_1 from the indicated source.
The next issue is how to update the posterior given θ_1 and u_1. The answer depends on the value of u_1. When u_1 = -1 or 0, θ_1 is a new particle, and we have
$$D' \,|\, \theta_1, \{u_1 \le 0\} \sim \mathrm{DP}\Bigl(\alpha_\nu p_\nu + \alpha' p_{q\mu} + \sum_{k=1}^{m} q_k c_k T(\phi_k, \cdot) + \delta_{\theta_1}\Bigr). \tag{21}$$
If u_1 = l > 0, we know that the particle φ_l is retained in the subsampling process (i.e. the corresponding Bernoulli trial outputs 1), and the transited version T(φ_l) is determined to be θ_1. Hence,
$$D' \,|\, \theta_1, \{u_1 = l > 0\} \sim \mathrm{DP}\Bigl(\alpha_\nu p_\nu + \alpha' p_{q\mu} + \sum_{k \ne l} q_k c_k T(\phi_k, \cdot) + (c_l + 1)\,\delta_{\theta_1}\Bigr). \tag{22}$$
With this posterior distribution, we can subsequently draw the second sample and so on. This process
generalizes the Chinese restaurant process in several ways: (1) it allows either inheriting previous
particles or drawing new ones; (2) it uses q_k to control the chance that we sample a previous particle;
(3) the transition T allows smooth variation when we inherit a previous particle.
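A minimal sketch of this generalized draw (ours; the samplers p_nu, p_qmu, and T are supplied by the caller, and all names are illustrative) makes the three-source structure of Eq.(20) explicit:

import numpy as np

def draw_theta(phis, counts, qs, alpha_nu, alpha_prime, p_nu, p_qmu, T, rng):
    """phis: previous particles phi_k; counts: c_k; qs: q(phi_k)."""
    w = np.concatenate(([alpha_nu, alpha_prime], np.asarray(qs) * np.asarray(counts)))
    idx = rng.choice(len(w), p=w / w.sum())
    if idx == 0:                    # u = -1: innovation distribution p_nu
        return p_nu(rng)
    if idx == 1:                    # u = 0: q-subsampled base distribution p_qmu
        return p_qmu(rng)
    return T(phis[idx - 2], rng)    # u = l > 0: transition of particle phi_l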
Inference with Mixture Models. We use the Markov chain of DPs as the prior of evolving mixture models. The generation process is formulated as
$$\theta_1, \ldots, \theta_n \sim D' \ \text{i.i.d.}, \quad \text{and} \quad x_i \sim L(\theta_i), \ i = 1, \ldots, n. \tag{23}$$
Here, L(θ_i) is the observation model parameterized by θ_i. According to the analysis above, we derive an algorithm to sample θ_1, ..., θ_n conditioned on the observations x_1, ..., x_n as follows.
Initialization. (1) Let m̂ denote the number of particles, which is initialized to be m and will increase as we draw new particles from p_ν or p_{qμ}. (2) Let w_k denote the prior weights of the different sampling sources, which may also change during the sampling. Particularly, we set w_k = q_k c_k for k > 0, w_{-1} = α_ν, and w_0 = α'. (3) Let φ_k denote the particles, whose values are decided when a new particle or the transited version of a previous one is sampled. (4) The label l_i indicates to which particle θ_i corresponds, and the counter r_k records the number of times that φ_k has been sampled (set to 0 initially). (5) We compute the expected likelihood, as given by F(k, i) := E_{p_k}(f(x_i|θ)). Here, f(x_i|θ) is the likelihood of x_i with respect to the parameter θ, and p_k is p_ν, p_{qμ}, or T(φ_k, ·) respectively when k = -1, k = 0, and k ≥ 1.
Sequential Sampling. For each i = 1, ..., n, we first draw the indicator u_i with probability P(u_i = k) ∝ w_k F(k, i). Depending on the value of u_i, we sample θ_i from different sources. For brevity, let p|x denote the posterior distribution derived from the prior distribution p conditioned on the observation x. (1) If u_i = -1 or 0, we draw θ_i from p_ν|x_i or p_{qμ}|x_i, respectively, and then add it as a new particle. Concretely, we increase m̂ by 1, let φ_{m̂} = θ_i, r_{m̂} = w_{m̂} = 1, and set l_i = m̂. Moreover, we compute F(m̂, i) = f(x_i|φ_{m̂}) for each i. (2) Suppose u_i = k > 0. If r_k = 0 then it is the first time we have drawn u_i = k. Since φ_k has not been determined, we sample θ_i ~ T(φ_k, ·)|x_i, then set φ_k = θ_i. If r_k > 0, the k-th particle has been sampled before. Thus, we can simply set θ_i = φ_k. In both cases, we set the label l_i = k, increase the weight w_k and the counter r_k by 1, and update F(k, i) to f(x_i|φ_k) for each i.
Note that this procedure is inefficient in that it samples each particle φ_k merely based on the first observation with label k. Therefore, we use this procedure for bootstrapping, and then run a Gibbs sampling scheme that iterates between parameter updates and label updates.
(Parameter update): We resample each particle φ_k from its source distribution conditioned on all samples with label k. In particular, for k ∈ [1, m] with r_k > 0, we draw φ_k ~ T(φ_k, ·)|{x_i : l_i = k}, and for k ∈ [m + 1, m̂], we draw φ_k ~ p|{x_i : l_i = k}, where p = p_{qμ} or p_ν, depending on which source φ_k was initially sampled from. After updating φ_k, we need to update F(k, i) accordingly.
(Label update): The label updating is similar to the bootstrapping procedure described above. The only difference is that when we update a label from k to k', we need to decrease the weight and counter for k. If r_k decreases to zero, we remove φ_k, and reset w_k to q_k c_k when k ≤ m.
At the end of each phase t, we sample φ_k ~ T(φ_k, ·) for each k with r_k = 0. In addition, for each such particle, we update the acceptance probability as q_k ← q_k · q(φ_k), which is the prior probability that the particle φ_k will survive in the next phase. MATLAB code is available at the following website: http://code.google.com/p/ddpinfer/.
6 Experimental Results
Here we present experimental results on both synthetic and real data. In the synthetic case, we
compare our method with dynamic FMM in modeling mixtures of Gaussians whose number and
centers evolve over time. For real data, we test the approach in modeling the motion of people in
crowded scenes and the trends of research topics reflected in index terms.
6.1 Simulations on Synthetic Data
The data for simulations were synthesized as follows. We initialized the model with two Gaussian
components, and added new components following a temporal Poisson process (one per 20 phases
[Figure 1 here: (a) comparison with D-FMM, plotting median distance and the actual number of components against phase t (legend: D-DPMM; D-FMM with K = 2, 3, 5); (b) median distance vs. # samples/component for acceptance probabilities q = 0.1, 0.9, 1; (c) median distance vs. # samples/component for diffusion variances 0.0001, 0.1, 100.]
Figure 1: The simulation results: (a) compares the performance between D-DPMM and D-FMM with differing
numbers of components. The upper graph shows the median of distance between the resulting clusters and the
ground truth at each phase. The lower graph shows the actual numbers of clusters. (b) shows the performance of
D-DPMM with different values of acceptance probability, under different data sizes. (c) shows the performance
of D-DPMM with different values of diffusion variance, under different data sizes.
on average). For each component, the life span has a geometric distribution with mean 40, the mean
evolves independently as a Brownian motion, and the variance is fixed to 1. We performed the
simulation for 80 phases, and at each phase, we drew 1000 samples for each active component. At
each phase, we sample for 5000 iterations, discarding the first 2000 for burn-in, and collecting a
sample every 100 iterations for performance evaluation. The particles of the last iteration at each
phase were incorporated into the model as a prior for sampling in the next phase. We obtained
the label for each observation by majority voting based on the collected samples, and evaluated
the performance by measuring the dissimilarity between the resultant clusters and the ground truth
using the variation of information [13] criterion. Under each parameter setting, we repeated the
experiment 20 times, utilizing the median of the dissimilarities for comparison.
We compare our approach (D-DPMM) with dynamic finite mixtures (D-FMM), which assumes a
fixed number of Gaussians whose centers vary as Brownian motion. From Figure 1(a), we observe
that when the fixed number K of components equals the actual number, they yield comparable performance; while when they are not equal, the errors of D-FMM substantially increase. Particularly,
K less than the actual number results in significant underfitting (e.g. D-FMM with K = 2 or 3 at phases 30-50 and 66-76); when K is greater than the actual number, samples from the same component are divided into multiple groups and assigned to different components (e.g. D-FMM with K = 5 at phases 1-10 and 30-50). In all cases, D-DPMM consistently outperforms D-FMM due
to its ability to adjust the number of components to adapt to the change of observations.
We also studied how design parameters impact performance. In Figure 1(b), we see that setting the acceptance probability q to 0.1 tends to create new components rather than inherit them from previous phases,
component parameters from multiple phases. Figure 1(c) shows the effect of the diffusion variance
that controls the parameter variation. When it is small, the parameter in the next phase is tied tightly
with the previous value; when it is large, the estimation basically relies on new observations. Both
cases lead to performance degradation on small datasets, which indicates that it is important to maintain a balance between inheritance and innovation. Our framework provides the flexibility to attain
such a balance. Cross-validation can be used to set these parameters automatically.
6.2 Real Data Applications
Modeling People Flows. It was observed [11] that the majority of people walking in crowded areas
such as a rail station tend to follow motion flows. Typically, there are several flows at a time, and
each flow may last for a period. In this experiment, we apply our approach to extract the flows.
The test was conducted on video acquired in New York Grand Central Station, which comprises
90,000 frames for one hour (25 fps). A low-level tracker was used to obtain the tracks of people,
which were then processed by a rule-based filter that discards obviously incorrect tracks. We adopt
the flow model described in [11], which uses an affine field to capture the motion patterns of each
flow. The observation for this model is in the form of location-velocity pairs. We divided the entire
[Figure 2 here: (a) timelines of the top 20 people flows, with flows 1 and 2 illustrated; (b) timelines of the top 10 PAMI topics from 1990 to 2010, with leading index terms: 1. motion estimation, video sequences; 2. pattern recognition, pattern clustering; 3. statistical models, optimization problem; 4. discriminant analysis, information theory; 5. image segmentation, image matching; 6. face recognition, biological; 7. image representation, feature extraction; 8. photometry, computational geometry; 9. neural nets, decision theory; 10. image registration, image color analysis.]
Figure 2: The experiment results on real data. (a) left: the timelines of the top 20 flows; right: illustration of
first two flows. (Illustrations of larger sizes are in the supplement.) (b) left: the timelines of the top 10 topics;
right: the two leading keywords for these topics. (A list with more keywords is in the supplement.)
sequence into 60 phases (each for one minute), extract location-velocity pairs from all tracks, and
randomly choose 3000 pairs for each phase for model inference. The algorithm infers 37 flows in
total, while at each phase, the numbers of active flows range from 10 to 18. Figure 2(a) shows the
timelines of the top 20 flows (in terms of the numbers of assigned observations). We compare the
performance of our method with D-FMM by measuring the average likelihood on a disjoint dataset.
The value for our method is -3.34, while those for D-FMM are -6.71, -5.09, -3.99, -3.49, and -3.34, when K is respectively set to 10, 20, 30, 40, and 50. Consequently, with a much smaller
number of components (12 active components on average), our method attains a similar modeling
accuracy as a D-FMM with 50 components.
Modeling Paper Topics. Next we analyze the evolution of paper topics for IEEE Trans. on PAMI.
By parsing the webpage of IEEE Xplore, we collected the index terms for 3014 papers published in
PAMI from Jan, 1990 to May, 2010. We first compute the similarity between each pair of papers
in terms of relative fraction of overlapped index terms. We derive a 12-dimensional feature vector
using spectral embedding [2] over the similarity matrix for each paper. We run our algorithm on
these features with each phase corresponding to a year. Each cluster of papers is deemed a topic.
We compute the histogram of index terms and sort them in decreasing order of frequency for each topic. Figure 2(b) shows the timelines of the top 10 topics, together with the top two index terms for each. Not surprisingly, we see that topics such as "neural networks" arise early and then diminish, while "image segmentation" and "motion estimation" persist.
7 Conclusion and Future Directions
We developed a principled framework for constructing dependent Dirichlet processes. In contrast to
most DP-based approaches, our construction is motivated by the intrinsic relation between Dirichlet
processes and compound Poisson processes. In particular, we discussed three operations: superposition, subsampling, and point transition, which produce DPs depending on others. We further
combined these operations to derive a Markov chain of DPs, leading to a prior of mixture models
that allows creation, removal, and location variation of component models under a unified formulation. We also presented a Gibbs sampling algorithm for inferring the models. The simulations on
synthetic data and the experiments on modeling people flows and paper topics clearly demonstrate
that the proposed method is effective in estimating mixture models that evolve over time.
This framework can be further extended along different directions. The fact that each completely
random point process is a Poisson process suggests that any operation that preserves the complete
randomness can be applied to obtain dependent Poisson processes, and thus dependent DPs. Such
operations are definitely not restricted to the three ones discussed in this paper. For example, random
merging and random splitting of particles also possess this property, which would lead to an extended
framework that allows merging and splitting of component models. Furthermore, while we focused on a Markov chain in this paper, the framework can be straightforwardly generalized to any acyclic network of DPs. It is also interesting to study how it can be generalized to the case of an undirected network or even a continuous covariate space. We believe that as a starting point, this paper would
stimulate further efforts to exploit the relation between Poisson processes and Dirichlet processes.
References
[1] A. Ahmed and E. Xing. Dynamic Non-Parametric Mixture Models and The Recurrent Chinese Restaurant Process: with Applications to Evolutionary Clustering. In Proc. of SDM'08, 2008.
[2] F. R. Bach and M. I. Jordan. Learning spectral clustering. In Proc. of NIPS'03, 2003.
[3] J. Boyd-Graber and D. M. Blei. Syntactic Topic Models. In Proc. of NIPS'08, 2008.
[4] F. Caron, M. Davy, and A. Doucet. Generalized Polya Urn for Time-varying Dirichlet Process Mixtures. In Proc. of UAI'07, number 6, 2007.
[5] Y. Chung and D. B. Dunson. The Local Dirichlet Process. Annals of the Inst. of Stat. Math., (October 2007), January 2009.
[6] D. B. Dunson. Bayesian Dynamic Modeling of Latent Trait Distributions. Biostatistics, 7(4), October 2006.
[7] J. E. Griffin and M. F. J. Steel. Order-Based Dependent Dirichlet Processes. Journal of the American Statistical Association, 101(473):179-194, March 2006.
[8] J. E. Griffin and M. F. J. Steel. Time-Dependent Stick-Breaking Processes. Technical report, 2009.
[9] J. F. C. Kingman. Poisson Processes. Oxford University Press, 1993.
[10] J. J. Kivinen, E. B. Sudderth, and M. I. Jordan. Learning Multiscale Representations of Natural Scenes Using Dirichlet Processes. In Proc. of ICCV'07, 2007.
[11] D. Lin, E. Grimson, and J. Fisher. Learning Visual Flows: A Lie Algebraic Approach. In Proc. of CVPR'09, 2009.
[12] S. N. MacEachern. Dependent Nonparametric Processes. In Proceedings of the Section on Bayesian Statistical Science, 1999.
[13] M. Meila. Comparing Clusterings - An Axiomatic View. In Proc. of ICML'05, 2005.
[14] P. Muller, F. Quintana, and G. Rosner. A Method for Combining Inference across Related Nonparametric Bayesian Models. J. R. Statist. Soc. B, 66(3):735-749, August 2004.
[15] R. M. Neal. Markov Chain Sampling Methods for Dirichlet Process Mixture Models. Journal of Computational and Graphical Statistics, 9(2):249-265, 2000.
[16] V. Rao and Y. W. Teh. Spatial Normalized Gamma Processes. In Proc. of NIPS'09, 2009.
[17] C. E. Rasmussen. The Infinite Gaussian Mixture Model. In Proc. of NIPS'00, 2000.
[18] L. Ren, D. B. Dunson, and L. Carin. The Dynamic Hierarchical Dirichlet Process. In Proc. of ICML'08, New York, New York, USA, 2008. ACM Press.
[19] J. Sethuraman. A Constructive Definition of Dirichlet Priors. Statistica Sinica, 4(2):639-650, 1994.
[20] K.-A. Sohn and E. Xing. Hidden Markov Dirichlet Process: Modeling Genetic Recombination in Open Ancestral Space. In Proc. of NIPS'07, 2007.
[21] N. Srebro and S. Roweis. Time-Varying Topic Models using Dependent Dirichlet Processes, 2005.
[22] Y. W. Teh. Dirichlet Process, 2007.
[23] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet Processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[24] X. Zhu and J. Lafferty. Time-Sensitive Dirichlet Process Mixture Models, 2005.
Joint Analysis of Time-Evolving Binary Matrices
and Associated Documents
1 Eric Wang, 1 Dehong Liu, 1 Jorge Silva, 2 David Dunson and 1 Lawrence Carin
1 Electrical and Computer Engineering Department, Duke University
2 Statistics Department, Duke University
{eric.wang,dehong.liu,jg.silva,lawrence.carin}@duke.edu, [email protected]
Abstract
We consider problems for which one has incomplete binary matrices that evolve
with time (e.g., the votes of legislators on particular legislation, with each year
characterized by a different such matrix). An objective of such analysis is to infer
structure and inter-relationships underlying the matrices, here defined by latent
features associated with each axis of the matrix. In addition, it is assumed that
documents are available for the entities associated with at least one of the matrix axes. By jointly analyzing the matrices and documents, one may be used
to inform the other within the analysis, and the model offers the opportunity to
predict matrix values (e.g., votes) based only on an associated document (e.g.,
legislation). The research presented here merges two areas of machine-learning
that have previously been investigated separately: incomplete-matrix analysis and
topic modeling. The analysis is performed from a Bayesian perspective, with efficient inference constituted via Gibbs sampling. The framework is demonstrated
by considering all voting data and available documents (legislation) during the
220-year lifetime of the United States Senate and House of Representatives.
1 Introduction
There has been significant recent research on the analysis of incomplete matrices [10, 15, 1, 12,
13, 18]. Most analyses have been performed under the assumption that the matrix is real. There
are interesting problems for which the matrices may be binary; for example, reflecting the presence/absence of links on nodes of a graph, or for analysis of data associated with a series of binary
questions. One may connect an underlying real matrix to binary (or, more generally, integer) observations via a probit or logistic link function; for example, such analysis has been performed in the
context of analyzing legislative roll-call data [6]. A problem that has received less attention concerns
the analysis of time-evolving matrices. The specific motivation of this paper involves binary questions in a legislative setting; we are interested in analyzing such data over many legislative sessions,
and since the legislators change over time, it is undesirable to treat the entire set of votes as a single
matrix. Each piece of legislation (question) is unique, but it is desirable to infer inter-relationships
and commonalities over time. Similar latent groupings and relationships exist for the legislators.
This general setting is also of interest for analysis of more-general social networks [8].
A distinct line of research has focused on analysis of documents, with topic modeling constituting a
popular framework [4, 2, 17, 3, 11]. Although the analysis of matrices and documents has heretofore
been performed independently, there are many problems for which documents and matrices may be
coupled. For example, in addition to a matrix of links between websites or email sender/recipient
data, one also has access to the associated documents (website and email content). By analyzing the
matrices and documents simultaneously, one may infer inter-relationships about each. For example,
in a factor-based model of matrices [8], the associated documents may be used to relate matrix
factors to topics/words, providing insight from the documents about the matrix, and vice versa.
To the authors' knowledge, this paper represents the first joint analysis of time-evolving matrices and
associated documents. The analysis is performed using nonparametric Bayesian tools; for example,
the truncated Dirichlet process [7] is used to jointly cluster latent topics and matrix features. The
framework is demonstrated through analysis of large-scale data sets. Specifically, we consider binary
vote matrices from the United States Senate and House of Representatives, from the first congress
in 1789 to the present. Documents of the legislation are available for the most recent 20 years, and
those are also analyzed jointly with the matrix data. The quantitative predictive performance of this
framework is demonstrated, as is the power of this setting for making qualitative assessments of
large-scale and complex joint matrix-document data.
2 Modeling Framework
2.1 Time-evolving binary matrices
Assume we are given a set of binary matrices, {B_t}_{t=1,...,τ}, with B_t ∈ {0, 1}^{N_y^{(t)} × N_x^{(t)}}. The number of rows and columns, respectively N_y^{(t)} and N_x^{(t)}, may vary with time. For example, for the legislative roll-call data considered below, time index t corresponds to year, and the number of pieces of legislation and legislators changes with time (e.g., for the historical data considered for the United States congress, the number of states and hence legislators changes as the country has grown).
Using a modeling framework analogous to that in [6], the binary matrix has a probit-model generative process, with B_t(i, j) = 1 if X_t(i, j) > 0, and B_t(i, j) = 0 otherwise, and the latent real matrix is defined as
$$X_t(i, j) = \langle y_i^{(t)}, x_j^{(t)} \rangle + \alpha_i^{(t)} + \beta_j^{(t)} + \epsilon_{i,j}^{(t)} \tag{1}$$
(t)
where < ?, ? > denotes a vector inner product, and i,j ? N (0, 1). The random effects are drawn
(t)
?i
(t)
?j
? N (0, ??1
? N (0, ??1
? ), with ?? ? ?? ?? + (1 ? ?? )Gamma(a, b) and ?? ?
? ) and
?? ?? + (1 ? ?? )Gamma(a, b); ?? is a point measure at infinity, corresponding to there not being
an associated random effect. The probability of whether there is a random effect is controlled by ??
and ?? , each of which is drawn from a beta distribution.
Random effect β_j^{(t)} is motivated by our example application, for which the index j denotes a specific piece of legislation that is voted upon; this parameter reflects the "difficulty" of the vote. If |β_j^{(t)}| is large, then all people are likely to vote one way or the other (an "easy" vote), while if β_j^{(t)} is small, the details of the legislator (defined by y_i^{(t)}) and legislation (defined by x_j^{(t)}) strongly impact the vote. In previous political-science Bayesian analysis [6], researchers have simply set π_α = 1 and π_β = 0, but here we consider the model in a more-general setting, and infer these relationships.
Additionally, in previous Bayesian analysis [6] the dimensionality of y_i^{(t)} and x_j^{(t)} has been set a priori (usually to one or two). In related probabilistic matrix factorization (PMF) applied to real matrices [15, 12], priors/regularizers are employed to constrain the dimensionality of the latent features. Here we employ the sparse binary vector b ∈ {0, 1}^K, with b_k ~ Bernoulli(π_k), and π_k ~ Beta(c/K, d(K - 1)/K), for K set to a large integer. By setting c and d appropriately, this favors that most of the components of b are zero (imposes sparseness). Specifically, by integrating out the {π_k}_{k=1,K}, one may readily show that the number of non-zero components in b is a random variable drawn from Binomial(K, c/(c + d(K - 1))), and the expected number of ones in b is cK/[c + d(K - 1)]. This is related to a draw from a truncated beta-Bernoulli process [16].
We consider two types of matrix axes. Specifically, we assume that each row corresponds to a person/entity that may be present for matrix t + 1 and matrix t. It is assumed here that each column corresponds to a question (in the examples, a piece of legislation), and each question is unique. Since the columns are each unique, we assume x_j^{(t)} = b ⊙ x̃_j^{(t)}, with x̃_j^{(t)} ~ N(0, γ_x^{-1} I_K) and γ_x ~ Gamma(e, f), where ⊙ denotes the pointwise/Hadamard vector product. If the person/entity associated with the ith row at time t is introduced for the first time, its associated feature vector is similarly drawn: y_i^{(t)} = b ⊙ ỹ_i^{(t)}, with ỹ_i^{(t)} ~ N(0, γ_y^{-1} I_K) and γ_y ~ Gamma(e, f). However, assuming y_i^{(t)} is already drawn (person/entity i is active prior to time t + 1), a simple auto-regressive model is used to draw y_i^{(t+1)}: y_i^{(t+1)} = b ⊙ ỹ_i^{(t+1)}, with ỹ_i^{(t+1)} ~ N(ỹ_i^{(t)}, η^{-1} I_K) and η ~ Gamma(g, h). The prior on η is set to favor small/smooth changes in the features of an individual in consecutive years.
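A minimal sketch of this generative process for a single year (ours, not the authors' implementation; dimensions, random effects, and precisions are illustrative inputs, and the probit link is applied elementwise):

import numpy as np

def generate_votes(b, y_prev, alpha, beta, gamma_x, eta, rng):
    """b: (K,) binary sparsity pattern; y_prev: (Ny, K) prior-year legislator
    features; alpha: (Ny,) and beta: (Nx,) random effects. Returns (B_t, y_t)."""
    Ny, K = y_prev.shape
    Nx = beta.shape[0]
    x = b * rng.normal(scale=gamma_x ** -0.5, size=(Nx, K))  # legislation features
    y = b * rng.normal(loc=y_prev, scale=eta ** -0.5)        # AR(1) evolution of y
    X = y @ x.T + alpha[:, None] + beta[None, :] + rng.normal(size=(Ny, Nx))
    return (X > 0).astype(int), y           # B_t(i, j) = 1 iff X_t(i, j) > 0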
This model constitutes a relatively direct extension of existing techniques for real matrices [15, 12].
Specifically, we have introduced a probit link function and a simple auto-regression construction to
impose statistical correlation in the traits of a person/entity at consecutive times. The introduction
of the random effects β_j and α_i has also not been considered within much of the machine-learning matrix-analysis literature, but the use of β_j is standard in political-science Bayesian models [6]. The
principal modeling contribution of this paper concerns how one may integrate such a time-evolving
binary-matrix model with associated documents.
2.2 Topic model
The manner in which the topic modeling is performed is a generalization of latent Dirichlet allocation (LDA) [4]. Assume that the documents of interest have words drawn from a vocabulary V = {w_1, ..., w_V}. The kth topic is characterized by a distribution p_k on words ("bag-of-words" assumption), where p_k ~ Dir(α_V/V, ..., α_V/V). The generative model draws {p_k}_{k=1,T} once for each of the T possible topics.
Each document is characterized by a probability distribution on topics, where c_l ~ Dir(α_T/T, ..., α_T/T) corresponds to the distribution across T topics for document l. The generative process for drawing words for document l is to first (and once) draw c_l for document l. For word i in document l, we draw a topic z_il ~ Mult(c_l), and then the specific word is drawn from a multinomial with probability vector p_{z_il}.
The above procedure is like the standard LDA [4], with the difference manifested in how we handle the Dirichlet distributions Dir(α_V/V, ..., α_V/V) and Dir(α_T/T, ..., α_T/T). The Dirichlet distribution draws are constituted via Sethuraman's construction [14]; this allows us to place gamma priors on α_V and α_T, while retaining conjugacy, permitting analytic Gibbs sampling (we therefore get a full posterior distribution for all model parameters, while most LDA implementations employ a point estimate for the document-dependent probabilities of topics). Specifically, the following hierarchical construction is used for draws from Dir(α_V/V, ..., α_V/V) (and similarly for Dir(α_T/T, ..., α_T/T)):
$$p_k = \sum_{h=1}^{\infty} a_h \delta_{\omega_h}, \quad a_h = U_h \prod_{n<h} (1 - U_n), \quad U_h \sim \mathrm{Beta}(1, \alpha_V), \quad \omega_h \sim \sum_{w=1}^{V} \frac{1}{V} \delta_w \tag{2}$$
The probability mass a_h is associated with component ω_h ∈ {1, ..., V} of the probability vector. The infinite sum is truncated, analogous to the truncated stick-breaking representation of the
Dirichlet process [9].
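A minimal sketch of this construction (ours; H is an illustrative truncation level): stick weights a_h accumulate onto uniformly drawn vocabulary indices ω_h, yielding a truncated draw from Dir(α_V/V, ..., α_V/V).

import numpy as np

def dirichlet_via_sethuraman(alpha_V, V, H=1000, rng=None):
    """Truncated Sethuraman draw from Dir(alpha_V / V, ..., alpha_V / V)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.beta(1.0, alpha_V, size=H)
    a = u * np.concatenate(([1.0], np.cumprod(1.0 - u)[:-1]))  # stick weights a_h
    omega = rng.integers(V, size=H)                            # uniform base draws
    p = np.bincount(omega, weights=a, minlength=V)
    return p / p.sum()                                         # renormalize truncation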
2.3 Joint analysis of matrices and documents
Section 2.1 discusses how we model time-evolving binary matrices, and Section 2.2 describes our procedure for implementing topic models. We now put these two models together. Specifically, we consider the case for which there is a document D_j^{(t)} of words associated with the jth column at time t; in our example below, this will correspond to the jth piece of legislation in year t. It is possible that we may have documents associated with the matrix rows as well (e.g., speeches for the ith legislator), but in our model development (and in our examples), documents are only assumed present for the columns.
For column j at time t, we have both a feature vector x_j^{(t)} (for the matrix) and a distribution on topics c_j^{(t)} (for the document D_j^{(t)}), and these are now coupled; the remainder of the matrix and topic models are unchanged. We define a set of atoms {c*_m, μ*_m, λ*_m}_{m=1,M}. The atoms μ*_m are drawn from N(0, γ_x^{-1} I_K), again with a gamma prior placed on γ_x, and the λ*_m are also drawn from a gamma distribution; the c*_m are drawn iid from Dir(α_T/T, ..., α_T/T), using the Dirichlet distribution construction as above. To couple the pair (x_j^{(t)}, c_j^{(t)}), we draw an indicator variable u_jt as
$$u_{jt} \sim \sum_{m=1}^{M} b_m \delta_m, \quad b_m = C_m \prod_{i<m} (1 - C_i), \quad C_m \sim \mathrm{Beta}(1, \alpha) \tag{3}$$
with a gamma prior again placed on the concentration parameter α (and with C_M = 1). The pair (x_j^{(t)}, c_j^{(t)}) is now defined by x_j^{(t)} = b ⊙ x̃_j^{(t)}, with x̃_j^{(t)} ~ N(μ*_{u_jt}, (λ*_{u_jt})^{-1} I_K). Further, c_j^{(t)} is set to c*_{u_jt}.
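A minimal sketch of this coupled draw (ours; the cluster atoms and truncated stick-breaking weights are assumed given, and all names are illustrative): the indicator u_jt selects a cluster, whose Gaussian supplies the matrix feature and whose atom supplies the topic distribution.

import numpy as np

def draw_column(b, mu_star, lam_star, c_star, stick_w, rng):
    """b: (K,); mu_star: (M, K); lam_star: (M,); c_star: (M, T);
    stick_w: (M,) stick-breaking weights summing to one."""
    u = rng.choice(len(stick_w), p=stick_w)               # u_jt as in Eq.(3)
    x = b * rng.normal(mu_star[u], lam_star[u] ** -0.5)   # x_j^(t) from cluster u
    return x, c_star[u]                                   # c_j^(t) = c*_{u_jt}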
This construction clusters the columns, with the clustering mechanism defined by a truncated stick-breaking representation of the Dirichlet process [9]. The components {μ*_m, λ*_m}_{m=1,M} define a Gaussian mixture model (GMM) in matrix-column feature space, while the {c*_m}_{m=1,M} define a set of M probability vectors over topics, with one such vector associated with each of the aforementioned GMM mixture components. The truncated Dirichlet process infers how many mixture components are needed to represent the data.
[Figure 1 here: graphical model with plates over i, j, k = 1..K, t = 1..T, and m = 1..M.]
Figure 1: Graphical representation of the model, with the hyperparameters omitted for simplicity. The plates indicate replication, and the filled circle around B_t indicates it is observed.
In this construction, each of the matrix columns is associated with a distribution on topics (based upon which mixture component it is drawn from). This provides powerful interpretative insights between the latent features in the matrix model and the words from the associated documents. Further, since the topic and matrix models are constituted jointly, the topics themselves are defined so as to be best matched to the characteristics of the matrix (vis-a-vis simply modeling the documents in isolation, which may yield topics that are not necessarily well connected to what matters for the matrices). A graphical representation of the model is shown in Figure 1.
There are several extensions one may consider in future work. For example, for simplicity the GMM
in column feature space is assumed time-independent. One may consider having a separate GMM
for each time (year) t. Further, we have not explicitly imposed time-dependence in the topic model
itself, and this may also be considered [2, 11]. For the examples presented below on real data,
despite these simplifications, the model seems to perform well.
2.4 Computations
The posterior distribution of all model parameters has been computed using Gibbs sampling; the detailed update equations are provided as supplemental material at http://sites.google.com/site/matrixtopics/. The first 1000 Gibbs iterations were discarded as burn-in, followed by 500 collection iterations. The truncation levels on the model are T = 20, M = 10, K = 30, and the number of words in the vocabulary is V = 5249. Hyperparameters were set as a = b = e = f = 10^{-6}, c = d = 1, g = 10^{3}, and h = 10^{-3}. None of these parameters have been optimized, and "reasonable" related settings yield very similar results.
We have performed joint matrix and text analysis considering the United States Congress voting
records (and, when available, the document associated with the legislation); we consider both the
House of Representatives (House) and Senate, from 1789-2008. Legislation documents and metadata (bill sponsorship, party affiliation of voters, etc.) are available for sessions 101-110 (1989-2008). For the legislation, stop words were removed using a common stopword list (the 514 stop words are posted at http://sites.google.com/site/matrixtopics/, and the corpus was stemmed using a Porter stemmer). These data are available from www.govtrack.us and from the Library of Congress thomas.loc.gov (votes, text and metadata), while the votes dating from 1789 are at voteview.com. A binary matrix is manifested by mapping all "affirmative" vote codes (e.g., "Yea", "Yes", "Present") to one, and "negative" codes (e.g., "Nay", "No", "Not Present") to zero. Not all legislators are present to vote on a given piece of legislation, and therefore missing data are manifested naturally. It varies from year to year, but typically 4% of the votes
are missing in a given year.
We implemented our proposed model in non-optimized Matlab. Computations were performed on
a PC with a 3.6GHz CPU and 4GB memory. A total of 11.5 hours of CPU time are required for
analysis of Senate sessions 101-110 (1989-2008), and 34.6 hours for House sessions 101-110; in
both cases, this corresponds to joint analysis of both votes and text (legislation). If we only analyze
the votes, 15.5 hours of CPU time are required for Senate sessions 1-110 (1789-2008), and 62.1 hours for
House 1-110 respectively (the number of legislators in the House is over four times larger than that
for the Senate).
3 Experiments
3.1 Joint analysis of documents and votes
We first consider the joint analysis of the legislation (documents) and votes in the Senate, for 1989-2008. A key aspect of this analysis is the clustering of the legislation, with legislation j at time t mapped to a cluster (mixture component), with each mixture component characterized by a distribution across latent topics c_j^{(t)} and a latent feature x_j^{(t)} for the associated matrix analysis (recall Section 2.3). Five dominant clusters were inferred for these data. Since we are running a Gibbs sampler, and the cluster index changes in general between consecutive iterations (because the index is exchangeable), below we illustrate the nature of the clusters based upon the last Gibbs iteration.
The dimensionality of the features was inferred to be ||b||_0 = 5 (on average, across the Gibbs collection), but two dimensions dominated for the legislation feature vectors x_j^{(t)}. In Figure 2 we present the inferred distributions of the five principal mixture components (clusters). The cluster index and the indices of the features are arbitrary; we, for example, number the clusters from 1 to 5 for illustrative simplicity.
In Figure 2 we depict the distribution of topics c*_m associated with each of the five clusters, and in Figure 3 we list the ten most probable words associated with each of the topics. By examining the topic characteristics in Figure 3, and the cluster-dependent distribution of topics, we may assign words/topics to the latent features x_j^{(t)} that are linked to the associated matrix, and hence to the vote itself. For example, clusters 1 and 4, which are the most separated in latent space (top row in Figure 2), share a very similar support over topics (bottom row in Figure 2). These clusters appear to be associated with highly partisan topics, specifically taxes (topics 11 and 15) and health/Medicare/Social Security (topics 12 and 16), as can be seen by considering the topic-dependent words in Figure 3. Based upon the voting data and the party of the legislation sponsor (bill author), cluster 1 (red) appears to represent a Republican viewpoint on these topics, while cluster 4 (blue) appears to represent a Democratic viewpoint. This distinction will play an important role in predicting the votes on legislation based on the documents, as discussed below in Section 3.2.
In Figure 4 (last plot) we present the estimated density functions for the random-effect parameters α_i^{(t)} and β_j^{(t)} (estimated from the Gibbs collection iterations). Note that p(α) is much more tightly concentrated around zero than p(β). In the political science literature [6] (in which the legislation/documents have not been considered), researchers simply set α = 0, and therefore only assume random effects on the legislation, but not on the senators/congressmen. Our analysis appears to confirm that this simplification is reasonable.
3.2 Matrix prediction based on documents
There has been significant recent interest in the analysis of matrices, particularly in predicting matrix
entries that are missing at random [10, 15, 1, 12, 13, 18]. In such collaborative-filtering research, the
views of a subset of individuals on a movie, for example, help inform predictions on ratings of people
who have not seen the movie (but a fraction of the people must have seen every movie). However,
in the problem considered here, these previous models are not applicable: prediction of votes on a
new legislation L_N requires one to relate L_N to votes on previous legislation L_1, ..., L_{N-1}, but in the absence of any prior votes on L_N (this corresponds to estimating an entire column of the vote matrix). The joint analysis of text (legislation) and votes, however, offers the ability to relate L_N to L_1, ..., L_{N-1}, by making connections via the underlying topics of the legislation (documents), even in the absence of any votes for L_N.
To examine this predictive potential, we performed joint analysis on all votes and legislation (documents) in the US Senate from 1989-2007. Through this process, we yielded a model very similar
to that summarized in Figures 2-4. Using this model, we predict votes on new legislation in 2008,
based on the documents of the associated legislation (but using no vote information on this new
legislation). To do this, the mixture of topics learned from 1989-2007 data are assumed fixed (each
topic characterized by a distribution over words), and these fixed topics are used in the analysis of
[Figure 2 here: top row plots the first two latent dimensions of the legislation features for the 101st and 110th Congresses, with ellipses for clusters 1-5; bottom row shows each cluster's distribution over the T = 20 topics.]
Figure 2: Characteristics of the five principal mixture components (clusters) associated with Senate data, based upon joint analysis of the documents and the associated vote matrix. Top row: Principal two dimensions of the latent matrix features x_j^{(t)}, with the ellipses denoting the standard deviation about the mean of the five clusters. The points reflect specific legislation, with results shown for the 101st and 110th Congresses. The colors of the ellipses are linked to the colors of the topic distributions. Bottom row: Distribution of topics c*_m for the five clusters (number indices arbitrary). T = 20 topics are considered, and each cluster is characterized by a distribution over topics c*_m (bottom row), as well as an associated feature (top row) for the matrix.
[Figure 3 here: a table of the ten most probable words for each of the 20 topics; the exact column alignment is not fully recoverable from the extraction, but representative high-probability stems include tax, budget, military, defense, health, medicare, drug, immigration, crime, and environment.]
Figure 3: Top-ten most probable words associated with the Senate-legislation topics, 1989-2008.
In this manner, each of the new documents is mapped to one
of the mixture-dependent distributions on topics (one per cluster, m = 1, ..., M). If a particular piece of legislation is
mapped to cluster m (with the mapping based upon the words alone), it is then assumed that the latent
matrix feature associated with the legislation is the associated cluster mean (learned via the
modeling of the 1989-2007 data).
Once this mapping of legislation to the matrix latent space is achieved, and using the senator's latent
feature vector y_i^{(t)} from 2007, we may readily compute the inner product between y_i^{(t)} and the
cluster mean, and via the probit link function the probability of a "yes" vote is quantified for Senator i
on new legislation L_N. This is the model in (1), with the senator-dependent and legislation-dependent
random effects set to zero. Based upon Figure 4 (last plot), setting the senator-dependent random
effect to zero is a reasonable approximation. The legislation-dependent random effect, by contrast,
is expected to be important for legislation for which most senators vote "yes" (a large positive effect)
or "no" (a large negative effect).
[Figure 4: four panels, one per cluster (Clusters 1-4), each plotting the predicted probability of voting "yes" for the 102 senators (sorted by predicted probability) against empirical voting frequencies, with senators colored Democrat or Republican; the panels cover 26, 29, 43, and 46 votes. A fifth panel shows the estimated log densities of the two random-effect parameters.]
Figure 4: First four plots: predicted probability of voting "Yes" given only the legislation text for 2008, based
upon the model learned using vote-legislation data from 1989-2007. The dots (colored by party affiliation)
show the empirical voting frequencies for all legislation in the cluster, from 2008 (not used in the model). Only
four clusters are utilized during session 2008, out of the five inferred by the model for the overall period 1989-2007.
Last plot: estimated log densities of the senator- and legislation-dependent random effects; note how the
senator-dependent effect is much more sharply peaked near zero.

When testing the predictive quality of the model for the held-out year 2008, we assume the legislation-dependent
random effect is zero (since this parameter cannot be inferred without modeling the text and votes jointly, while for 2008
we are only modeling the documents); we therefore only test the model on legislation from 2008
for which fewer than 90% of the senators agreed, such legislation assumed to correspond to a small
random effect (it is assumed that in practice it would be simple to determine whether a piece of legislation is
likely to be near-unanimous "yes" or "no", and therefore model-based prediction of votes for such
legislation is deemed less interesting).
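To make the prediction rule concrete, the following sketch (the function and variable names are ours, and the vectors are stand-ins for quantities produced by the fitted model) computes the probit-based probability of a "yes" vote:

```python
import numpy as np
from scipy.stats import norm

def predict_yes_probability(y_senator, mu_cluster):
    """Probit-link probability that a senator votes 'yes' on a new bill.

    y_senator  : the senator's latent feature vector (from the last
                 observed year, 2007 in this experiment).
    mu_cluster : mean latent feature of the cluster the bill's text maps to.
    Both random effects are set to zero, as in the held-out 2008 test.
    """
    return norm.cdf(np.dot(y_senator, mu_cluster))

# Hypothetical 5-dimensional latent vectors, for illustration only.
rng = np.random.default_rng(0)
print(predict_yes_probability(rng.normal(size=5), rng.normal(size=5)))
```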
In Figure 4 we compare the predicted, probit-based probability of a given senator voting "yes" for
legislation within clusters 1-4 (see Figure 2); the points in Figure 4 represent the empirical data for
each senator, and the curve represents the predictions of the probit link function. These results are
deemed to be remarkably good. In Figure 4, the senators along each horizontal axis are ordered
according to the probability of voting "yes".
One interesting issue that arises in this prediction concerns clusters 1 and 4 in Figure 2, and the associated predictions for the held-out year 2008, in Figure 4. Since the distributions of these clusters
over topics are very similar, the documents alone cannot distinguish between clusters 1 and 4. However, we also have the sponsor of each piece of legislation, and based upon the data from 1989-2007,
if a piece of legislation from 2008 is mapped to either cluster 1 or 4, it is disambiguated based upon
the party affiliation of the sponsor (cluster 1 is a Republican viewpoint on these topics, while cluster
4 is a Democratic viewpoint, based upon voting records from 1989-2007).
3.3 Time evolution of congressmen and legislation

The above joint analysis of text and votes was restricted to 1989-2008, since the documents (legislation) were only available for those years. However, the dataset contains votes on all legislation
from 1789 to the present, and we now analyze the vote data from 1789-1988. Figure 5 shows
snapshots in time of the latent space for voters and legislation, for the House of Representatives
(similar results have been computed for the Senate, and are omitted for brevity; as supplemental material, at http://sites.google.com/site/matrixtopics/ we present movies of how
legislation and congressmen evolve across all time, for both the House and Senate). Five features
were inferred, with the two highest-variance features chosen for the axes. The blue symbols denote
Democratic legislators, or legislation sponsored by a Democrat, and the red points correspond to
Republicans. Results like these are of interest to political scientists, and allow examination of the
degree of partisanship over time, for example.
[Figure 5: latent-space scatter plots of congressmen (top row) and legislation (bottom row) for the sessions 1789-1790, 1939-1940, 1947-1948, 1963-1964, and 1983-1984, with points labeled Democrat, Republican, or Others.]
Figure 5: Congressmen (top) and legislation (bottom) in latent space for sessions 1-98 of the House of Representatives. The Democrat/Republican separation is usually sharper than for the Senate, and frequently only
the partisan information seems to matter. Note the gradual rotation of the red/blue axis. Best viewed
electronically, zoomed-in.
3.4 Additional quantitative tests
One may ask how well this model addresses the more classical problem of estimating the values of
matrix data that are missing uniformly at random, in the absence of documents. To examine this
question, we considered binary Senate vote data from 1989-2008, removed a fraction of the
votes uniformly at random, and then used the proposed time-evolving matrix model to process the
observed data and to compute the probability of a "yes" vote on all missing data (via the probit link
function). If the probability is larger than 0.5 the vote is set to "yes", and otherwise it is set to "no".
We compare our time-evolving model to [12], with the addition of a probit link function; for the
latter we processed all 20 years as one large matrix, rather than analyzing time-evolving structure.
Up to 40% missingness, the proposed model and a modified version of that in [12] performed almost
identically, with an average probability of error (on the binary vote) of approximately 0.1. For
greater than 40% missingness, the proposed time-evolving model manifested a "phase transition",
and the probability of error increased smoothly up to 0.3 as the fraction of missing data rose to
80%; in contrast, the generalized model in [12] (with probit link) continued to yield a probability
of error of about 0.1. The phase transition of the proposed model likely occurs because the
entire matrix is partitioned by year, with a linkage between years manifested via the Markov process
between legislators (we do not analyze all data as one contiguous, large matrix). The phase transition
is expected based on the theory in [5] when the fraction of missing data gets large enough (since
the size of the contiguous matrices analyzed by the time-evolving model is much smaller than that
of the entire matrix, such a phase transition is expected with less missingness than via analysis of
the entire matrix at once).
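A minimal sketch of the imputation protocol just described, assuming the model's scores are available as an array (names and shapes are ours):

```python
import numpy as np
from scipy.stats import norm

def imputation_error_rate(scores, truth, held_out):
    """Binary error rate of thresholded probit predictions on removed votes.

    scores   : (senators x bills) array of model scores; norm.cdf(scores)
               is the predicted probability of a 'yes' vote.
    truth    : matching binary array of true votes (1 = yes, 0 = no).
    held_out : boolean mask of the cells removed uniformly at random.
    """
    predictions = (norm.cdf(scores) > 0.5).astype(int)
    return float(np.mean(predictions[held_out] != truth[held_out]))
```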
While the above results are of interest and deemed encouraging, such uniformly random missingness
on matrix data alone is not the motivation of the proposed model. Rather, traditional matrix-analysis
methods [10, 15, 1, 12, 13, 18] are incapable of predicting votes on new legislation based on the
words alone (as in Figure 4), and such models do not allow analysis of the time-evolving properties
of elements of the matrix, as in Figure 5.
4 Conclusions
A new model has been developed for the joint analysis of time-evolving matrices and associated
documents. To the authors' knowledge, this paper represents the first integration of research heretofore performed separately on topic models and on matrix analysis/completion. The model has been
implemented efficiently via Gibbs sampling. A unique set of results are presented using data from
the US Senate and House of Representatives, demonstrating the ability to predict the votes on new
legislation, based only on the associated documents. The legislation data was considered principally
because it was readily available and interesting in its own right; however, the proposed framework
is of interest for many other problems. For example, the model is applicable to analysis of time-evolving relationships between multiple entities, augmented by the presence of documents (e.g.,
links between websites, and the associated document content).
Acknowledgement
The research reported here was supported by the US Army Research Office, under grant W911NF08-1-0182, and the Office of Naval Research under grant N00014-09-1-0212.
References
[1] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert. A new approach to collaborative filtering: operator estimation with spectral regularization. J. Machine Learning Research, 2009.
[2] D. M. Blei and J. D. Lafferty. Dynamic topic models. Proceedings of the 23rd International Conference on Machine Learning, pages 113-120, 2006.
[3] D. M. Blei and J. D. Lafferty. A correlated topic model of science. The Annals of Applied Statistics, 1(1):17-35, 2007.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[5] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053-2080, 2010.
[6] J. Clinton, S. Jackman, and D. Rivers. The statistical analysis of roll call data. Am. Political Sc. Review, 2004.
[7] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209-230, 1973.
[8] P. D. Hoff. Multiplicative latent factor models for description and prediction of social networks. Computational and Mathematical Organization Theory, 2009.
[9] H. Ishwaran and L. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96:161-174, 2001.
[10] E. Meeds, Z. Ghahramani, R. Neal, and S. Roweis. Modeling dyadic data with binary latent factors. In Advances in NIPS, pages 977-984, 2007.
[11] I. Pruteanu-Malinici, L. Ren, J. Paisley, E. Wang, and L. Carin. Hierarchical Bayesian modeling of topics in time-stamped documents. IEEE Trans. Pattern Analysis Mach. Intell., 2010.
[12] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization with MCMC. In Advances in NIPS, 2008.
[13] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in NIPS, 2008.
[14] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
[15] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Advances in NIPS, 2005.
[16] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In International Conference on Artificial Intelligence and Statistics, 2007.
[17] H. M. Wallach. Topic modeling: beyond bag of words. Proceedings of the 23rd International Conference on Machine Learning, 2006.
[18] K. Yu, J. Lafferty, S. Zhu, and Y. Gong. Large-scale collaborative prediction using a nonparametric random effects model. In Proc. Int. Conf. Machine Learning, 2009.
3,482 | 4,153 | Cross Species Expression Analysis using a Dirichlet
Process Mixture Model with Latent Matchings
Ziv Bar-Joseph
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA, USA
[email protected]
Hai-Son Le
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA, USA
[email protected]
Abstract
Recent studies compare gene expression data across species to identify core and
species specific genes in biological systems. To perform such comparisons researchers need to match genes across species. This is a challenging task since
the correct matches (orthologs) are not known for most genes. Previous work in
this area used deterministic matchings or reduced multidimensional expression
data to binary representation. Here we develop a new method that can utilize soft
matches (given as priors) to infer both unique and similar expression patterns
across species and a matching for the genes in both species. Our method uses
a Dirichlet process mixture model which includes a latent data matching variable. We present learning and inference algorithms based on variational methods
for this model. Applying our method to immune response data we show that it
can accurately identify common and unique response patterns by improving the
matchings between human and mouse genes.
1 Introduction
Researchers have been increasingly relying on cross species analysis to understand how biological
systems operate. Sequence-based methods have been successfully applied to identify and characterize coding and functional non-coding regions in multiple species [1]. However, sequence information is static and thus provides only a partial view of cellular activity. More recent studies attempt
to integrate sequence and gene expression data from multiple species [2, 3, 4]. Unlike sequence,
expression levels are dynamic and differ across time and conditions. By combining expression and
sequence data, researchers were able to identify both "core" and "divergent" genes. "Core" genes
are similarly expressed across species and are useful for constructing models of conserved systems,
for example the cell cycle [2]. "Divergent" genes are similar in sequence but differ in expression
across species. These are useful for identifying species specific responses, for example why some
pathogens are resistant to drugs while others are not [3].
While useful, cross species analysis of expression data is challenging. In addition to the regular
issues with expression data (noise, missing values, etc.) when comparing expression levels across
species researchers need to match genes across species. For most genes the correct match in another
species (known as ortholog) is not known. A number of methods have been suggested to solve the
matching problem. The first set of methods is based on a one to one deterministic assignment by
relying on top sequence matches. Such an assignment can be used to concatenate the expression
vectors for matched genes across species and then cluster the resulting vectors. For example, Stuart
et al. [5] constructed "metagenes" consisting of top sequence matches from four species. These
were used to cluster the data from multiple species to identify conserved and divergent patterns.
Bergmann et al. [6] defined one of the species (species A) as a reference and first clustered genes
in A. They then used matched genes in the second species (B) as starting points for clustering
genes in B. When the clustering algorithm converges in B, genes that remain in the cluster are
considered "core" whereas genes that are removed are "divergent". Quon et al. [4] used a mixture of
Gaussians model, which takes as input the expression data of orthologous genes and a phylogenetic
tree connecting the species, to reconstruct the expression profiles as well as to detect divergent
links in the phylogeny. The second set of methods allowed for soft matches but was limited to
analyzing binary or discrete data with very few labels. For example, Lu et al. combined experiments
from multiple species by using Markov Random Fields [7] and Gaussian Random Fields [8] in which
edges represent sequence similarity and potential functions constrain similar genes across species to
have a similar expression pattern.
While both approaches led to successful applications, they suffer from drawbacks that limit their
use in practice. In many cases the top sequence match is not the correct ortholog and a deterministic
assignment may lead to wrong conclusions about the conservation of genes. Methods that have
used soft assignments were limited to summarization of the data (up or down regulated) and could
not utilize more complex profiles. Here we present a new method that uses soft assignments to
allow comparison and clustering across species of arbitrary expression data without requiring prior
knowledge on the phylogeny. Our method takes as input expression datasets in two species and a
prior on matches between homologous genes in these species (derived from sequence data). The
method simultaneously clusters the expression values for both species while computing a posterior
for the assignment of orthologs for genes. We use Dirichlet Process model to automatically detect
the number of clusters.
We have tested our method on simulated and immune response data. In both cases the algorithm
was able to find correct matches and to improve upon methods that used a deterministic assignment.
While the method was developed for, and applied to, biological data, it is general and can be used to
address other problems including matchings of captions to images (see Section 5).
2 Problem definition
In this section, we first describe in detail the cross-species analysis problem for gene expression
data. Next, we formalize this as a general clustering and matching problem for cases in which the
matches are not known in advance.
Using microarrays or new sequencing techniques researchers can monitor the expression levels of
genes under certain conditions or at specific time points. For each such measurement we obtain a
vector whose elements are the expression values for all genes (there are usually thousands of entries
in each vector). We assume that the input consists of microarray experiments from two species and
each species has a different set of genes. While the exact matches between genes in both species
are not known for most genes, we have a prior for gene pairs (one from each species) which is
derived from sequence data [9]. Our goal is to simultaneously cluster the genes in both species.
Such clustering can identify coherent and divergent responses between the species. In addition, we
would like to infer for each gene in one species whether there exists a homolog that is similarly
expressed in the other species and, if so, which one.
The problem can also be formalized more generally in the following way. Denote by $x = [x_1, x_2, \dots, x_{n_x}]$ and $y = [y_1, y_2, \dots, y_{n_y}]$ the datasets of samples from two different experiment settings, where $x_i \in \mathbb{R}^{p_x}$ and $y_j \in \mathbb{R}^{p_y}$. In addition, let M be a sparse non-negative $n_x \times n_y$ matrix that encodes prior information regarding the matching of samples in x and y. We define the match probability between $x_i$ and $y_j$ as follows:

$$p(x_i \text{ and } y_j \text{ are matched}) = \frac{M(i, j)}{N_i} = \pi_{i,j} \qquad p(x_i \text{ is not matched}) = \frac{1}{N_i} = \pi_{i,0} \quad (1)$$

where $N_i = 1 + \sum_{j=1}^{n_y} M(i, j)$. $\pi_{i,0}$ is the prior probability that $x_i$ is not matched to any element in Y. We use $\pi_i$ to denote the vector $(\pi_{i,0}, \dots, \pi_{i,n_y})$. Finally, let $m_i \in \{0, 1, \dots, n_y\}$ be the latent matching variable. If $m_i \ge 1$ we say that $x_i$ is matched to $y_{m_i}$; if $m_i = 0$ we say that $x_i$ has no match in y. Our goal is to infer both the latent variables $m_i$ and the cluster memberships for the pairs of samples $(x_i, y_{m_i})$. The following notation is used in the rest of the paper. Lowercase normal font, e.g. x, is used for a single variable and lowercase bold font, e.g. x, is used for vectors. Uppercase bold roman letters, such as M, denote matrices. Uppercase letters, e.g. X, are used to represent random variables and E[X] represents the expectation of a random variable X.
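As an illustration, a small sketch computing the match priors of (1) from a score matrix M (names are ours; the example scores are hypothetical):

```python
import numpy as np

def match_priors(M):
    """Row-wise match priors from a sparse non-negative score matrix M.

    Returns an (n_x, n_y + 1) array P in which P[i, 0] = pi_{i,0} (the prior
    that x_i is unmatched) and P[i, j] = pi_{i,j} for j >= 1, following (1).
    """
    M = np.asarray(M, dtype=float)
    N = 1.0 + M.sum(axis=1, keepdims=True)   # N_i = 1 + sum_j M(i, j)
    return np.hstack([1.0 / N, M / N])

# Hypothetical similarity scores for 2 samples in x against 3 samples in y.
M = np.array([[1000.0, 0.0, 80.0],
              [0.0, 0.0, 0.0]])
print(match_priors(M))   # second row is unmatched with probability 1
```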
3 Model
Model selection is an important problem when analyzing real world data. Many clustering algorithms, including Gaussian mixture models, require as an input the number of clusters. In addition to
domain knowledge, this model selection question can be addressed using cross validation. Bayesian
nonparametric methods provide an alternative solution allowing the complexity of the model to grow
based on the amount of available data. Under-fitting is addressed by the fact that the model allows
for unbounded complexity while over-fitting is mitigated by the Bayesian assumption. We use this
approach to develop a nonparametric model for clustering and matching cross-species expression
data. Our model, termed the Dirichlet Process Mixture Model with Latent Matchings (DPMMLM), extends the popular Dirichlet Process Mixture Model to cases where priors are provided for the matchings
between the vectors to be clustered.
3.1 Dirichlet Process
Let $G_0$ be a probability measure on a measurable space. We write $G \sim \mathrm{DP}(\alpha, G_0)$ if G is a random probability measure drawn from a Dirichlet process (DP). The existence of the Dirichlet process was first proven by [10]. Furthermore, measures drawn from a DP are discrete with probability one. This property can be seen from the explicit stick-breaking construction due to Sethuraman [11] as follows.

Let $(V_i)_{i=1}^{\infty}$ and $(\eta_i)_{i=1}^{\infty}$ be independent sequences of i.i.d. random variables, $V_i \sim \mathrm{Beta}(1, \alpha)$ and $\eta_i \sim G_0$. Then a random measure G defined as

$$\pi_i = V_i \prod_{j=1}^{i-1} (1 - V_j), \qquad G = \sum_{i=1}^{\infty} \pi_i \delta_{\eta_i} \quad (2)$$

where $\delta_\eta$ is a probability measure concentrated at $\eta$, is a random probability measure distributed according to $\mathrm{DP}(\alpha, G_0)$, as shown in [11].
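A sketch of sampling mixture weights from this construction, using the finite truncation introduced later in Section 3.5 (names are ours):

```python
import numpy as np

def stick_breaking_weights(alpha, K, rng):
    """Sample mixture weights pi_1..pi_K from the stick-breaking
    construction (2), truncated by forcing V_K = 1 (see Section 3.5)."""
    V = rng.beta(1.0, alpha, size=K)
    V[-1] = 1.0                                  # truncation: V_K = 1
    stick_left = np.concatenate([[1.0], np.cumprod(1.0 - V[:-1])])
    return V * stick_left

rng = np.random.default_rng(0)
pi = stick_breaking_weights(alpha=1.0, K=20, rng=rng)
print(pi.sum())   # the truncated weights sum to 1
```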
3.2 Dirichlet Process Mixture Model (DPMM)
The Dirichlet process has been used as a nonparametric prior on the parameters of a mixture model. This model is referred to as the Dirichlet Process Mixture Model. Let z be the mixture membership indicator variables for data variables x. Using the stick-breaking construction in (2), the Dirichlet process mixture model is given by

$$G \sim \mathrm{DP}(\alpha, G_0); \qquad z_i, \eta_i \mid G \sim G; \qquad x_i \mid z_i, \eta_i \sim F(\eta_i) \quad (3)$$

where $F(\eta_i)$ denotes the distribution of the observation $x_i$ given parameter $\eta_i$.
3.3 Dirichlet Process Mixture Model with Latent Matchings (DPMMLM)
In this section, we describe the new mixture model based on a DP with latent variables for data matching between x and y. We use $F_X(\eta)$ and $F_Y(\eta)$ to denote the marginal distributions of X and Y respectively, and $F_{X|Y}(y, \eta)$ to denote the conditional distribution of X given Y. The parameter $\eta$ is a random variable with prior distribution $G_0(\eta \mid \lambda_0)$ and hyperparameter $\lambda_0$. Also, let $z_i$ be the mixture membership of the sample pair $(x_i, y_{m_i})$. Our model is given by:

$$G \sim \mathrm{DP}(\alpha, G_0)$$
$$z_i, \eta_i \mid G \sim G$$
$$m_i \mid \pi_i \sim \mathrm{Discrete}(\pi_i)$$
$$y_{m_i} \mid m_i, z_i, \eta_i \sim F_Y(\eta_i), \quad \text{if } m_i > 0$$
$$x_i \mid m_i, z_i, \eta_i, y \sim \begin{cases} F_{X|Y}(y_{m_i}, \eta_i) & \text{if } m_i > 0 \\ F_X(\eta_i) & \text{otherwise} \end{cases} \quad (4)$$

The major difference between our model and a regular DPMM is the dependence of $x_i$ on y if $m_i > 0$. In other words, the assignment of x to a cluster depends both on its own expression levels and on the levels of the y component to which it is matched. If x is not matched to any y component then we resort to the marginal distribution $F_X$ of the mixture.
3.4 Mean-field variational methods
For probabilistic models, mean-field variational methods [12, 13] provide a deterministic and bounded approximation to the intractable joint probability of observed and hidden variables. Briefly, given a model with observed variables x and hidden variables h, we would like to compute $\log p(x)$, which requires us to marginalize over all hidden variables h. Since $p(x, h)$ is often intractable, we can find a tractable probability $q(h)$ that gives the best lower bound of $\log p(x)$ using Jensen's inequality:

$$\log p(x) \ge \int_h q(h) \log p(x, h) - q(h) \log q(h)\, dh = E_q[\log p(x, h)] - E_q[\log q(h)] \quad (5)$$

Maximizing this lower bound is equivalent to finding the distribution $q(h)$ that minimizes the KL divergence between $q(h)$ and $p(h \mid x)$. Hence, $q(h)$ is the best approximation model within the chosen parametric family.
3.5 Variational Inference for DPMMLM
Although the DP mixture model is an "infinite" mixture model, it is intractable to solve the optimization problem when allowing for infinitely many variables. We thus follow the truncation approach used in [14] and limit the number of clusters to K. When K is chosen to be large enough, the truncated distribution closely approximates a draw from the Dirichlet process [14]. To restrict the number of clusters to K, we set $V_K = 1$ and thus obtain $\pi_i = 0$ for $i > K$ in (2). The likelihood of the observed data is

$$p(x, y \mid \alpha, \lambda_0) = \int_{m,z,v,\eta} p(\eta \mid \lambda_0)\, p(v \mid \alpha) \prod_{i=1}^{n_x} \Big[ p(z_i \mid v) \prod_{k=1}^{K} \Big\{ \big[\pi_{i,0}\, f_X(x_i \mid \eta_k)\big]^{m_i^0} \prod_{j=1}^{n_y} \big[\pi_{i,j}\, f_{X|Y}(x_i \mid y_j, \eta_k)\, f_Y(y_j \mid \eta_k)\big]^{m_i^j} \Big\}^{z_{ik}} \Big] \quad (6)$$

where $m_i^j$ denotes the indicator that $m_i = j$, $z_{ik}$ the indicator that $z_i = k$, $p(z_i \mid v) = v_{z_i} \prod_{k=1}^{z_i - 1} (1 - v_k)$, and v are the stick-breaking variables given in Section 3.1. The first part of (6), $p(\eta \mid \lambda_0)\, p(v \mid \alpha)$, is the likelihood of the model parameters and the second part is the likelihood of the assignments to clusters and matchings.
Following the variational inference framework for conjugate-exponential graphical models [15], we choose a distribution that factorizes over $\{m_i, z_i\}_{i=1,\dots,n_x}$, $\{v_k\}_{k=1,\dots,K-1}$ and $\{\eta_k\}_{k=1,\dots,K}$ as follows:

$$q(m, z, v, \eta) = \prod_{i=1}^{n_x} \Big[ q_{\nu_i}(m_i) \prod_{j=0}^{n_y} q_{\phi_{i,j}}(z_i)^{m_i^j} \Big] \prod_{k=1}^{K-1} q_{\gamma_k}(v_k) \prod_{k=1}^{K} q_{\tau_k}(\eta_k) \quad (7)$$

where $q_{\nu_i}(m_i)$ and $q_{\phi_{i,j}}(z_i)$ are multinomial distributions and $q_{\gamma_k}(v_k)$ are beta distributions. These distributions are conjugate to the likelihood of the parameters in (6). $q_{\tau_k}(\eta_k)$ requires special treatment due to the coupling of the marginal and conditional distributions in the likelihood; these issues are discussed in detail in Section 3.5.2.
Using this variational distribution we obtain a lower bound for the log likelihood:

$$\log p(x, y \mid \alpha, \lambda_0) \ge E[\log p(\eta \mid \lambda_0)] + E[\log p(V \mid \alpha)] + \sum_{i=1}^{n_x} \Big\{ E[\log p(Z_i \mid V)] + \sum_{j=0}^{n_y} \sum_{k=1}^{K} E[M_{ij} Z_{ik}] (\log \pi_{i,j} + \psi_{i,j,k}) \Big\} - E[\log q(M, Z, V, \eta)] \quad (8)$$

where all expectations are with respect to the distribution $q(m, z, v, \eta)$ and

$$\psi_{i,j,k} = \begin{cases} E[\log f_{X|Y}(X_i \mid Y_j, \eta_k)] + E[\log f_Y(Y_j \mid \eta_k)] & \text{if } j > 0 \\ E[\log f_X(X_i \mid \eta_k)] & \text{if } j = 0 \end{cases}$$

To compute the terms in (8), we note that

$$E[M_{ij} Z_{ik}] = \nu_{i,j}\, \phi_{i,j,k} = \rho_{i,j,k}$$
$$E[\log p(Z_i \mid V)] = \sum_{k=1}^{K} q(z_i > k)\, E[\log(1 - V_k)] + q(z_i = k)\, E[\log V_k]$$

where $q(z_i > k) = \sum_{j=0}^{n_y} \sum_{t=k+1}^{K} \rho_{i,j,t}$ and $q(z_i = k) = \sum_{j=0}^{n_y} \rho_{i,j,k}$.
Coordinate ascent inference algorithm
The lower bound above can be optimized by a coordinate ascent algorithm. The update rules for
all terms except for the q? (?), are presented below. These are direct applications of the variational
inference for conjugate-exponential graphical models [15]. We discuss the update rule for q? (?) in
section 3.5.2.
? Update for q?k (vk ):
?k,1 = 1 +
ny
nx X
X
?i,j,k
?k,2 = ? +
i=1 j=0
ny
nx X
K
X
X
?i,j,t
i=1 j=0 t=k+1
? Update for q?i,j (zi ) and q?i (mi ):
?i,j,k ? exp ?i,j,k +
k?1
X
E[log(1 ? Vk )] + E[log Vk ]
k=1
?i,j ? exp log ?i,j +
3.5.2
K
X
?i,j,k ?i,j,k +
k?1
X
E[log(1 ? Vk )] + E[log Vk ]
k=1
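For concreteness, a sketch of the first update rule, assuming the responsibilities $\rho_{i,j,k}$ are stored in a dense array (shapes and names are ours):

```python
import numpy as np

def update_gamma(rho, alpha):
    """Beta-parameter updates for q(v_k), given the responsibilities rho.

    rho   : (n_x, n_y + 1, K) array with rho[i, j, k] = E[M_ij Z_ik].
    alpha : DP concentration parameter.
    """
    s = rho.sum(axis=(0, 1))                     # sum_{i,j} rho_{i,j,k}, per k
    suffix = np.cumsum(s[::-1])[::-1]            # suffix[k] = sum_{t >= k} s[t]
    tail = np.concatenate([suffix[1:], [0.0]])   # sum over t > k
    return 1.0 + s, alpha + tail
```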
k=1
Application of the model to multivariate Gaussians
The previous sections described the model in general terms. In the rest of this section, and in our experiments, we focus on data that is assumed to be distributed as a multivariate Gaussian with unknown mean and covariance matrix. The prior distribution $G_0$ is then given by the conjugate Gaussian-Wishart prior. In a classical DP Gaussian mixture model with a Gaussian-Wishart prior, the posterior distribution of the parameters can be computed analytically. Unfortunately, in our model, the coupling of the conditional and marginal distributions in the likelihood makes it difficult to derive analytical formulas for the posterior distribution. Note that if $(X, Y) \sim N(\mu, \Sigma)$ with $\mu = (\mu_X, \mu_Y)$ and $\Sigma = \begin{pmatrix} \Sigma_X & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_Y \end{pmatrix}$, then $X \sim N(\mu_X, \Sigma_X)$, $Y \sim N(\mu_Y, \Sigma_Y)$ and

$$X \mid Y = y \sim N\big(\mu_X + \Sigma_{XY} \Sigma_Y^{-1} (y - \mu_Y),\ \Sigma_X - \Sigma_{XY} \Sigma_Y^{-1} \Sigma_{YX}\big). \quad (9)$$
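For reference, the standard conditioning formulas in (9) can be computed as in the following sketch (names are ours):

```python
import numpy as np

def conditional_gaussian(mu, Sigma, y, p_x):
    """Mean and covariance of X | Y = y for a joint Gaussian, as in (9).

    mu, Sigma : mean and covariance of the joint vector (X, Y);
    p_x       : dimension of the X block.
    """
    m_x, m_y = mu[:p_x], mu[p_x:]
    S_xx, S_xy = Sigma[:p_x, :p_x], Sigma[:p_x, p_x:]
    S_yx, S_yy = Sigma[p_x:, :p_x], Sigma[p_x:, p_x:]
    A = S_xy @ np.linalg.inv(S_yy)
    return m_x + A @ (y - m_y), S_xx - A @ S_yx
```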
We therefore introduce an approximation distribution for the datasets which decouples the marginal and conditional distributions as follows:

$$f_X(x \mid \mu_X, \Lambda_X) = N(\mu_X, \Sigma = \Lambda_X^{-1})$$
$$f_Y(y \mid \mu_Y, \Lambda_Y) = N(\mu_Y, \Sigma = \Lambda_Y^{-1})$$
$$f_{X|Y}(x \mid y, W, b, \mu_X, \Lambda_X) = N(\mu_X + b - Wy, \Sigma = \Lambda_X^{-1})$$

where W is a $p_x \times p_y$ projection matrix and $\Lambda$ is the precision matrix. In this approximation, we assume that the covariance matrices of X and X|Y are the same; in other words, the covariance of X is independent of Y. The matrix W models the linear correlation of X on Y, similar to $-\Sigma_{XY}\Sigma_Y^{-1}$ in (9).

The priors for $\mu_X, \Lambda_X$ and $\mu_Y, \Lambda_Y$ are given by Gaussian-Wishart (GW) distributions. A flat improper prior is given to W and b: $p_0(W) = 1$, $p_0(b) = 1$ for all W, b. These assumptions lead to decoupling of the marginal and conditional distributions. Therefore, the distribution $q_{\tau_k}(\eta_k)$ can now be factorized into two GW distributions and distributions over W and b. To avoid cluttering symbols, we omit the subscript k of the specific cluster:

$$q_{\tau_k}(\eta_k) = \mathrm{GW}(\mu_X, \Lambda_X)\, \mathrm{GW}(\mu_Y, \Lambda_Y)\, g(W)\, g(b)$$
Posterior distribution of $\mu_Y, \Lambda_Y$: the update rules follow the standard posterior distribution of Gaussian-Wishart conjugate priors.

Posterior distribution of $\mu_X, \Lambda_X$ and W, b: due to the coupling of $\mu_X, \Lambda_X$ with W, we run a coordinate ascent procedure to find the optimal posterior distribution. The posterior distribution of W, b is a singleton discrete distribution g such that $g(W^*) = 1$, $g(b^*) = 1$.

• Update for the posterior distribution of $\mu_X, \Lambda_X$:

$$\beta_X = \beta_{X0} + n_X, \qquad m_X = \frac{1}{\beta_X}\big(\beta_{X0}\, m_{X0} + n_X\, \bar{x}\big), \qquad \nu_X = \nu_{X0} + n_X$$

$$S_X^{-1} = S_{X0}^{-1} + V_X + \frac{\beta_{X0}\, n_X}{\beta_{X0} + n_X} (\bar{x} - m_{X0})(\bar{x} - m_{X0})^T$$

where

$$n_X = \sum_{i=1}^{n_x} \sum_{j=0}^{n_y} \rho_{i,j,k}, \qquad \bar{x} = \frac{1}{n_X}\Big( \sum_{i=1}^{n_x} \rho_{i,0,k}\, x_i + \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \rho_{i,j,k}\, (x_i - b^* + W^* y_j) \Big)$$

and

$$V_X = \sum_{i=1}^{n_x} \rho_{i,0,k}\, (x_i - \bar{x})(x_i - \bar{x})^T + \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \rho_{i,j,k}\, (x_i - b^* + W^* y_j - \bar{x})(x_i - b^* + W^* y_j - \bar{x})^T.$$
• Update for $W^*, b^*$: we find the $W^*, b^*$ that maximize the log likelihood. Taking the derivative with respect to $W^*$ and solving, and likewise for $b^*$, we get

$$W^* = -\Big( \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \rho_{i,j,k}\, (x_i - m_X - b^*)\, y_j^T \Big) \Big( \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \rho_{i,j,k}\, y_j y_j^T \Big)^{-1}$$

$$b^* = \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \rho_{i,j,k}\, (x_i - m_X + W^* y_j) \Big/ \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \rho_{i,j,k}$$
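A sketch of the coordinate ascent over $W^*$ and $b^*$ implied by the two fixed-point equations above (array shapes and names are ours; this is an illustration, not the authors' implementation):

```python
import numpy as np

def update_W_b(X, Y, rho, m_X, n_iter=10):
    """Coordinate ascent for the cluster-specific projection W* and offset b*.

    X   : (n_x, p_x) array of samples from the first dataset.
    Y   : (n_y, p_y) array of samples from the second dataset.
    rho : (n_x, n_y) responsibilities rho[i, j] for one cluster
          (matched pairs only, j >= 1 in the paper's notation).
    m_X : current posterior mean of mu_X for this cluster.
    """
    p_x, p_y = X.shape[1], Y.shape[1]
    W, b = np.zeros((p_x, p_y)), np.zeros(p_x)
    r_tot = rho.sum()
    wy = rho.sum(axis=0) @ Y                      # sum_ij rho_ij y_j
    G = np.einsum('ij,ja,jb->ab', rho, Y, Y)      # sum_ij rho_ij y_j y_j^T
    for _ in range(n_iter):
        C = np.einsum('ij,ia,jb->ab', rho, X - m_X - b, Y)
        W = -np.linalg.solve(G.T, C.T).T          # W* = -C G^{-1}
        b = (np.einsum('ij,ia->a', rho, X - m_X) + wy @ W.T) / r_tot
    return W, b
```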
Experiments and Results
Simulated data
We demonstrate the performance of the model in identifying data matchings as well as cluster membership of datapoints using simulated data. To generate a simulated dataset, we sample 120 datapoints from a mixture of three 5-dimensional Gaussians with separation coefficient = 2 leading to
well separated mixtures1 . The covariance matrix was derived from the autocorrelation matrix for
a first-order autoregressive process leading to highly dependent components (? = 0.9). From these
samples, we use the first 3 dimensions to create 120 datapoints x = [x1 , . . . , x120 ]. The last two
dimensions of the first 100 datapoints are used to create y = [y1 , . . . , y100 ] (note that there are no
matches for 20 points in x). Hence, the ground truth M matrix is a diagonal 120 ? 100 matrix.
We selected a large value for the diagonal entries (? = 1000) in order to place a strong prior for
the correct matchings. Next, for t = 0, . . . , 20, we randomly select t entries on each row of M
and set them to ?2 r, where r ? ?21 . We repeat the process 20 times for each t to compute the
mean and standard deviation shown in Figure 1(a) and Figure 1(b). We compare the performance
of our model(DPMMLM) with a standard Dirichlet Process Mixture Model where each component
in x is matched based on the highest prior: {(xi , yj ? ) | i = 1, . . . , 100 and j ? = argmaxj M(i, j)}
(DPMM). For all models, the truncation level (K) is set to 20 and ? is 1. Figure 1(a) presents the
percentage of correct matchings inferred by DPMMLM and the highest prior matching. For DPMMLM, a datapoint xi is matched to the datapoint yj with the largest posterior probability ?i,j .
With the added noise, DPMMLM can still achieve an accuracy of 50% when the highest prior
matching leads to only 25% accuracy. Figure 1(b) and 1(c) show the Normalized Mutual Information (NMI) and Adjusted Rand index [17] for the clusters inferred by the two models compared to
the true clusters. As can be seen, while the percentage of correct matchings decreased with the added
noise, DPMMLM still achieves high NMI of 0.8 and Adjusted Rand index of 0.92. In conclusion,
by relying on matchings of points DPMMLM can still performs very well in terms of its ability to
identify correct clusters even with the high noise levels.
1
Following [16], a Gaussian mixture is c-separated if for each pair (i, j) of components, kmi ? mj k2 ?
c2 D max(?max
, ?max
) , where ?max denotes the maximum eigenvalue of their covariance.
i
j
6
DPMMLM
Top matches
80
70
60
50
40
30
20
0
5
10
15
20
Number of random entries per row (t)
(a) The % of correct matchings.
1
1
0.9
0.9
0.8
0.8
Adjusted Rand index
% of correct matchings
90
Normalized Mutual Information
100
0.7
DPMMLM
DPMM
0.6
0.5
0.4
0.3
0.7
0.6
DPMMLM
DPMM
0.5
0.4
0.3
0.2
0.2
0.1
0
5
10
15
0.1
0
20
Number of random entries per row (t)
(b) Normalized MI.
5
10
15
Number of random entries per row (t)
20
(c) Adjusted Rand index.
Figure 1: Evaluation of the result on simulated data.
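Both clustering metrics used in Figure 1 are available in standard libraries; a toy sketch using scikit-learn (our choice of library, not necessarily the authors'):

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

labels_true = [0, 0, 1, 1, 2, 2]   # toy ground-truth cluster assignments
labels_pred = [0, 0, 1, 2, 2, 2]   # toy inferred cluster assignments
print(normalized_mutual_info_score(labels_true, labels_pred))
print(adjusted_rand_score(labels_true, labels_pred))
```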
4.2 Immune response dataset
[Figure 2: heatmaps of gene expression for the clusters inferred by (a) DPMMLM (five clusters, human and mouse profiles side by side) and (b) DPMM (three clusters).]
Figure 2: The heatmap for clusters inferred for the immune response dataset.
We compared human and mouse immune response datasets to identify similar and divergent genes.
We selected two experiments that studied immune response to gram negative bacteria. The first was
a time series of human response to Salmonella [18]. Cells were infected with Salmonella and were
profiled at: 0.5h, 1h, 2h, 3h and 4h. The second looked at mouse response to Yersinia enterocolitica
with and without treatment by IFN-γ [19]. We used BLASTN to compute the sequence similarity
(bit-score) between all human and mouse genes. For each species we selected the most varying 500
genes and expanded the gene list to include all matched genes in the other species with a bit score
greater than 75. This led to a set of 1476 human and 1967 mouse genes which we compared using
our model. The M matrix is the bit scores between human and mouse genes thresholded at 75.
The resulting clusters are presented in Figure 2(a). In that figure, the first five dimensions are human
expression values and each gene in human is matched to the mouse gene with the highest posterior.
Human genes which are not matched to any mouse gene in the cluster have a blank line on the
mouse side of the figure. The algorithm identified five different clusters. Clusters 1, 4 and 5 display
a similar expression pattern in human and mouse with genes either up or down regulated in response
to the infection. Genes in cluster 2 differ between the two species being mostly down regulated in
humans while slightly upregulated in mouse. Human genes in cluster 3 also differ from their mouse
orthologs. While they are strongly upregulated in humans, the corresponding mouse genes do not
change much.
P value | Corrected P | GO term description
2.86216e-10 | <0.001 | regulation of apoptosis
4.97408e-10 | <0.001 | regulation of cell death
7.82427e-10 | <0.001 | protein binding
4.14320e-10 | <0.001 | regulation of programmed cell death
4.49332e-09 | <0.001 | positive regulation of cellular process
4.77653e-09 | <0.001 | positive regulation of biological process
8.27313e-09 | <0.001 | response to chemical stimulus
1.17013e-07 | 0.001 | cytoplasm
1.28299e-07 | 0.001 | response to stress
2.20104e-07 | 0.001 | cell proliferation
5.06685e-07 | 0.001 | response to stimulus
6.15795e-07 | 0.001 | negative regulation of biological process
7.70651e-07 | 0.001 | cellular process
7.78266e-07 | 0.002 | regulation of localization
1.09778e-06 | 0.002 | response to organic substance
1.42704e-06 | 0.002 | collagen metabolic process
1.91735e-06 | 0.003 | negative regulation of cellular process
3.23244e-06 | 0.005 | multicellular organismal macromolecule metabolic process
3.39901e-06 | 0.005 | interspecies interaction
3.66178e-06 | 0.005 | negative regulation of apoptosis
Table 1: The GO enrichment result for cluster 1 identified by DPMMLM.
We used the Gene Ontology (GO, www.geneontology.org) to calculate the enrichment of functional categories in each cluster based on the hypergeometric distribution. Genes in cluster 1 (Table 1) are associated with immune and stress responses. Interestingly, the most significant category for this cluster is "regulation of apoptosis" (corrected p-value < 0.001). Indeed, both Salmonella and Yersinia are known to induce apoptosis in host cells [20]. When clustering the two datasets independently, the significance of this category is greatly reduced, indicating that accurate matchings can lead to better identification of core pathways (see Appendix). Cluster 4 contains the most coherent set of upregulated genes across the two species. One of the top GO categories for this cluster is "response to molecule of bacterial origin" (corrected p-value < 0.001), which is the most accurate description of the condition tested. See the Appendix for complete GO tables of all clusters. In contrast to clusters in which mouse and human genes are similarly expressed, cluster 3 genes are strongly upregulated in human cells while not changing in mouse. This cluster is enriched for ribosomal proteins (corrected p-value < 0.001). This may indicate different strategies utilized by the bacteria in the two experiments. There are studies that show that pathogens can upregulate the synthesis of ribosomal genes (which are required for translation) [21] whereas other studies indicate that ribosomal genes may not change much, or may even be reduced, following infection [22]. The results of our analysis indicate that while ribosomal genes are upregulated following Salmonella infection in human cells, they are not activated following Yersinia infection in mouse.
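A sketch of the hypergeometric enrichment test used here (the counts in the example are hypothetical; scipy's parameterization is total genes, annotated genes, cluster size):

```python
from scipy.stats import hypergeom

def go_enrichment_pvalue(total, annotated, cluster_size, overlap):
    """Hypergeometric enrichment p-value P(X >= overlap) for one category.

    total        : number of genes considered,
    annotated    : genes carrying the GO annotation,
    cluster_size : genes in the cluster,
    overlap      : annotated genes observed inside the cluster.
    """
    return hypergeom.sf(overlap - 1, total, annotated, cluster_size)

# Hypothetical counts, for illustration only.
print(go_enrichment_pvalue(total=1476, annotated=120, cluster_size=200, overlap=45))
```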
We have also analyzed the matchings obtained using sequence data alone (prior) and by combining
sequence and expression data (posterior) using our method. The top posterior gene is the same
as the top prior gene in most cases (905 of the 1476 human genes). However, there are several
cases in which the prior and posterior differ. 293 human genes are not matched to any mouse
gene in the cluster they are assigned to indicating that they are expressed in a species dependent
manner. Additionally, for 278 human genes the top posterior and prior mouse gene differ. To test
whether these differences inferred by the algorithm are biologically meaningful we compared our
Dirichlet method to a method that uses deterministic assignments, as was done in the past. Using
such assignments the algorithm identified only three clusters, as shown in Figure 2(b). None of
these clusters looked homogeneous across species.
5 Conclusions
We have developed a new model for simultaneously clustering and matching genes across species.
The model uses a Dirichlet Process to infer the number of clusters. We developed an efficient
variational inference method that scales to large datasets with almost 2000 datapoints. We have
also demonstrated the power of our method on simulated data and an immune response dataset. While
the method was presented in the context of expression data it is general and can be used for other
matching tasks in which a prior can be obtained. For example, when trying to determine a caption
for images extracted from webpages a prior can be obtained by relying on the distance between the
image and the text on the page. Next, clustering can be employed to utilize the abundance of images
that are extracted and improve the matching outcome.
Acknowledgments
We thank the anonymous reviewers for constructive and insightful comments. This work is supported in part by NIH grant 1RO1 GM085022 and NSF grants DBI-0965316 and CAREER-0448453
to Z.B.J.
References
[1] M. Kellis, N. Patterson, M. Endrizzi, B. Birren, and E. S. Lander. Sequencing and comparison of yeast species to identify genes and regulatory elements. Nature, 423:241-254, May 2003.
[2] L. J. Jensen, T. S. Jensen, U. de Lichtenberg, S. Brunak, and P. Bork. Co-evolution of transcriptional and post-translational cell-cycle regulation. Nature, 443:594-597, Oct 2006.
[3] G. Lelandais et al. Genome adaptation to chemical stress: clues from comparative transcriptomics in Saccharomyces cerevisiae and Candida glabrata. Genome Biol., 9:R164, 2008.
[4] G. Quon, Y. W. Teh, E. Chan, M. Brudno, T. Hughes, and Q. D. Morris. A mixture model for the evolution of gene expression in non-homogeneous datasets. In Advances in Neural Information Processing Systems, volume 21, 2009.
[5] J. M. Stuart, E. Segal, D. Koller, and S. K. Kim. A gene-coexpression network for global discovery of conserved genetic modules. Science, 302:249-255, Oct 2003.
[6] Sven Bergmann, Jan Ihmels, and Naama Barkai. Similarities and differences in genome-wide expression data of six organisms. PLoS Biol, 2(1):e9, 12 2003.
[7] Y. Lu, R. Rosenfeld, and Z. Bar-Joseph. Identifying cycling genes by combining sequence homology and expression data. Bioinformatics, 22:e314-322, Jul 2006.
[8] Y. Lu, R. Rosenfeld, G. J. Nau, and Z. Bar-Joseph. Cross species expression analysis of innate immune response. J. Comput. Biol., 17:253-268, Mar 2010.
[9] R. Sharan et al. Conserved patterns of protein interaction in multiple species. Proc. Natl. Acad. Sci. U.S.A., 102:1974-1979, Feb 2005.
[10] Thomas S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209-230, 1973.
[11] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
[12] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, November 1999.
[13] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn., 1(1-2):1-305, 2008.
[14] H. Ishwaran and L. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, pages 161-173, March 2001.
[15] Zoubin Ghahramani and Matthew J. Beal. Propagation algorithms for variational Bayesian learning. In Advances in Neural Information Processing Systems 13, pages 507-513. MIT Press, 2001.
[16] Sanjoy Dasgupta. Learning mixtures of Gaussians. In FOCS '99: Proceedings of the 40th Annual Symposium on Foundations of Computer Science, Washington, DC, USA, 1999.
[17] M. Meila. Comparing clusterings by the variation of information. In Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24-27, 2003: proceedings, page 173. Springer Verlag, 2003.
[18] C. S. Detweiler et al. Host microarray analysis reveals a role for the Salmonella response regulator phoP in human macrophage cell death. Proc. Natl. Acad. Sci. U.S.A., 98:5850-5855, May 2001.
[19] K. van Erp et al. Role of strain differences on host resistance and the transcriptional response of macrophages to infection with Yersinia enterocolitica. Physiol. Genomics, 25:75-84, 2006.
[20] D. M. Monack, B. Raupach, et al. Salmonella typhimurium invasion induces apoptosis in infected macrophages. Proc. Natl. Acad. Sci. U.S.A., 93:9833-9838, Sep 1996.
[21] O. O. Zharskaia et al. [Activation of transcription of ribosome genes following human embryo fibroblast infection with cytomegalovirus in vitro]. Tsitologiia, 45:690-701, 2003.
[22] J. W. Gow, S. Hagan, P. Herzyk, C. Cannon, P. O. Behan, and A. Chaudhuri. A gene signature for post-infectious chronic fatigue syndrome. BMC Med Genomics, 2:38, 2009.
3,483 | 4,154 | Discriminative Clustering by Regularized
Information Maximization
Ryan Gomes
[email protected]
Andreas Krause
[email protected]
Pietro Perona
[email protected]
California Institute of Technology
Pasadena, CA 91106
Abstract
Is there a principled way to learn a probabilistic discriminative classifier from an
unlabeled data set? We present a framework that simultaneously clusters the data
and trains a discriminative classifier. We call it Regularized Information Maximization (RIM). RIM optimizes an intuitive information-theoretic objective function which balances class separation, class balance and classifier complexity. The
approach can flexibly incorporate different likelihood functions, express prior assumptions about the relative size of different classes and incorporate partial labels
for semi-supervised learning. In particular, we instantiate the framework to unsupervised, multi-class kernelized logistic regression. Our empirical evaluation
indicates that RIM outperforms existing methods on several real data sets, and
demonstrates that RIM is an effective model selection method.
1
Introduction
Clustering algorithms group data items into categories without requiring human supervision or definition of categories. They are often the first tool used when exploring new data. A great number
of clustering principles have been proposed, most of which can be described as either generative
or discriminative in nature. Generative clustering algorithms provide constructive definitions of
categories in terms of their geometric properties in a feature space or as statistical processes for
generating data. Examples include k-means and Gaussian mixture model clustering. In order for
generative clustering to be practical, restrictive assumptions must be made about the underlying
category definitions.
Rather than modeling categories explicitly, discriminative clustering techniques represent the
boundaries or distinctions between categories. Fewer assumptions about the nature of categories
are made, making these methods powerful and flexible in real world applications. Spectral graph
partitioning [1] and maximum margin clustering [2] are example discriminative clustering methods.
A disadvantage of existing discriminative approaches is that they lack a probabilistic foundation,
making them potentially unsuitable in applications that require reasoning under uncertainty or in
data exploration.
We propose a principled probabilistic approach to discriminative clustering, by formalizing the
problem as unsupervised learning of a conditional probabilistic model. We generalize the work of
Grandvalet and Bengio [3] and Bridle et al. [4] in order to learn probabilistic classifiers that are
appropriate for multi-class discriminative clustering, as explained in Section 2. We identify two
fundamental, competing quantities, class balance and class separation, and develop an information
theoretic objective function which trades off these quantities. Our approach corresponds to
maximizing mutual information between the empirical distribution on the inputs and the induced
1
label distribution, regularized by a complexity penalty. Thus, we call our approach Regularized
Information Maximization (RIM).
In summary, our contribution is RIM, a probabilistic framework for discriminative clustering with
a number of attractive properties. Thanks to its probabilistic formulation, RIM is flexible: it is
compatible with diverse likelihood functions and allows specification of prior assumptions about
expected class proportions. We show how our approach leads to an efficient, scalable optimization
procedure that also provides a means of automatic model selection (determination of the number
of clusters). RIM is easily extended to semi-supervised classification. Finally, we show that RIM
performs better than competing approaches on several real-world data sets.
2 Regularized Information Maximization
Suppose we are given an unlabeled dataset of N feature vectors (datapoints) X = (x_1, ..., x_N), where x_i = (x_{i1}, ..., x_{iD})^T ∈ R^D are D-dimensional vectors with components x_id. Our goal is to learn a conditional model p(y|x, W) with parameters W which predicts a distribution over label values y ∈ {1, ..., K} given an input vector x.
Our approach is to construct a functional F(p(y|x, W); X, λ) which evaluates the suitability of p(y|x, W) as a discriminative clustering model. We then use standard discriminative classifiers such as logistic regression for p(y|x, W), and maximize the resulting function F(W; X, λ) over the parameters W. λ is an additional tuning parameter that is fixed during optimization.
We are guided by three principles when constructing F(p(y|x, W); X, λ). The first is that the discriminative model's decision boundaries should not be located in regions of the input space that are densely populated with datapoints. This is often termed the cluster assumption [5], and also corresponds to the idea that datapoints should be classified with large margin. Grandvalet & Bengio [3] show that a conditional entropy term −(1/N) Σ_i H{p(y|x_i, W)} very effectively captures the cluster
assumption when training probabilistic classifiers with partial labels. However, in the case of fully
unsupervised learning this term alone is not enough to ensure sensible solutions, because conditional
entropy may be reduced by simply removing decision boundaries and unlabeled categories tend to
be removed. We illustrate this in Figure 1 (left) with an example using the multilogit regression
classifier as the conditional model p(y|x, W), which we will develop in Section 3.
In order to avoid degenerate solutions, we incorporate the notion of class balance: we prefer configurations in which category labels are assigned evenly across the dataset. We define the empirical
label distribution
p̂(y; W) = ∫ p̂(x) p(y|x, W) dx = (1/N) Σ_i p(y|x_i, W),
which is an estimate of the marginal distribution of y. A natural way to encode our preference towards class balance is to use the entropy H{p̂(y; W)}, because it is maximized when the labels are uniformly distributed. Combining the two terms, we arrive at
I_W{y; x} = H{p̂(y; W)} − (1/N) Σ_i H{p(y|x_i, W)}    (1)
which is the empirical estimate of the mutual information between x and y under the conditional
model p(y|x, W).
Bridle et al. [4] were the first to propose maximizing I_W{y; x} in order to learn probabilistic classifiers without supervision. However, they note that I_W{y; x} may be trivially maximized by a conditional model that classifies each data point x_i into its own category y_i, and that classifiers trained
with this objective tend to fragment the data into a large number of categories, see Figure 1 (center).
We therefore introduce a regularizing term R(W; λ) whose form will depend on the specific choice of p(y|x, W). This term penalizes conditional models with complex decision boundaries in order to yield sensible clustering solutions. Our objective function is
F(W; X, λ) = I_W{y; x} − R(W; λ)    (2)
and we therefore refer to our approach as Regularized Information Maximization (RIM), see Figure 1
(right). While we motivated this objective with notions of class balance and separation, our approach
may be interpreted as learning a conditional distribution for y that preserves information from the
data set, subject to a complexity penalty.
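As a concrete illustration, the following is a minimal NumPy sketch of the objective in Eq. 2 for the multilogit model developed in Section 3; the function and variable names, and numerical details such as the log-sum-exp shift, are ours and not the authors':

import numpy as np

def rim_objective(W, b, X, lam):
    # p(y=k|x_i, W) via a numerically stabilized softmax; W is K x D, b is K, X is N x D
    logits = X @ W.T + b
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    p_hat = P.mean(axis=0)                                          # empirical label marginal
    eps = 1e-12
    h_marginal = -np.sum(p_hat * np.log(p_hat + eps))               # H{p_hat(y; W)}: class balance
    h_conditional = -np.mean(np.sum(P * np.log(P + eps), axis=1))   # average H{p(y|x_i, W)}
    mutual_info = h_marginal - h_conditional                        # Eq. 1
    return mutual_info - lam * np.sum(W * W)                        # Eq. 2 with the L2 penalty of Eq. 3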
[Figure 1 appears here: a grid of panels over axes x1, x2 comparing Bridle et al. [4], Grandvalet & Bengio [3], and RIM; the top row shows decision regions, the bottom row conditional entropy.]
Figure 1: Example unsupervised multilogit regression solutions on a simple dataset with three clusters. The top and bottom rows show the category label arg max_y p(y|x, W) and conditional entropy H{p(y|x, W)} at each point x, respectively. We find that both class balance and regularization terms are necessary to learn unsupervised classifiers suitable for multi-class clustering.
3 Example application: Unsupervised Multilogit Regression
The RIM framework is flexible in the choice of p(y|x; W) and R(W; λ). As an example instantiation, we here choose multiclass logistic regression as the conditional model. Specifically, if K is the maximum number of classes, we choose
p(y = k|x, W) ∝ exp(w_k^T x + b_k)   and   R(W; λ) = λ Σ_k w_k^T w_k,    (3)
where the set of parameters W = {w_1, ..., w_K; b_1, ..., b_K} consists of weight vectors w_k and bias values b_k for each class k. Each weight vector w_k ∈ R^D is D-dimensional with components w_kd. The regularizer is the squared L2 norm of the weight vectors, and may be interpreted as an
isotropic normal distribution prior on the weights W. The bias terms are not penalized.
In order to optimize Eq. 2 specialized with Eqs. 3, we require the gradients of the objective function.
For clarity, we define p_ki ≡ p(y = k|x_i, W) and p̂_k ≡ p̂(y = k; W). The partial derivatives are
∂F/∂w_kd = (1/N) Σ_{i,c} (∂p_ci/∂w_kd) log(p_ci / p̂_c) − 2λ w_kd   and   ∂F/∂b_k = (1/N) Σ_{i,c} (∂p_ci/∂b_k) log(p_ci / p̂_c).    (4)
Naive computation of the gradient requires O(NK²D), since there are K(D + 1) parameters and each derivative requires a sum over NK terms. However, the form of the conditional probability derivatives for multi-logit regression are:
∂p_ci/∂w_kd = (δ_kc − p_ci) p_ki x_id   and   ∂p_ci/∂b_k = (δ_kc − p_ci) p_ki,
where δ_kc is equal to one when indices k and c are equal, and zero otherwise. When these expressions are substituted into Eq. 4, we find the following expressions:
∂F/∂w_kd = (1/N) Σ_i x_id p_ki [ log(p_ki / p̂_k) − Σ_c p_ci log(p_ci / p̂_c) ] − 2λ w_kd    (5)
∂F/∂b_k = (1/N) Σ_i p_ki [ log(p_ki / p̂_k) − Σ_c p_ci log(p_ci / p̂_c) ]
Computing the gradient requires only O(NKD) operations, since the terms Σ_c p_ci log(p_ci / p̂_c) may be computed once and reused in each partial derivative expression.
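A vectorized sketch of this gradient computation (a reading of Eq. 5 under our own names, not a reference implementation):

import numpy as np

def rim_gradient(W, b, X, lam):
    logits = X @ W.T + b
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)                 # p_ki, shape N x K
    N = X.shape[0]
    p_hat = P.mean(axis=0)
    eps = 1e-12
    L = np.log(P + eps) - np.log(p_hat + eps)         # log(p_ki / p_hat_k)
    s = np.sum(P * L, axis=1, keepdims=True)          # sum_c p_ci log(p_ci / p_hat_c), computed once
    G = P * (L - s)                                   # bracketed factor of Eq. 5, N x K
    grad_W = G.T @ X / N - 2.0 * lam * W              # K x D
    grad_b = G.sum(axis=0) / N                        # K
    return grad_W, grad_b                             # ascent directions for F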
The above gradients are used in the L-BFGS [6] quasi-Newton optimization algorithm.¹ We find empirically that the optimization usually converges within a few hundred iterations. When specialized
¹ We used Mark Schmidt's implementation at http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html.
[Figure 2 appears here: three panels (Class Probabilities, Bias, Weight Vector Norms) plotted against class index.]
Figure 2: Demonstration of model selection on the toy problem from Figure 1. The algorithm is initialized with 50 category weight vectors w_k. Upon convergence, only three of the categories are populated with data examples. The negative bias terms of the unpopulated categories drive the unpopulated class probabilities p̂_k towards zero. The corresponding weight vectors w_k have norms near zero.
to multilogit regression, the objective function F(W; X, λ) is non-concave. Therefore the algorithm
can only be guaranteed to halt at locally optimal stationary points of F . In Section 3.1, we explain
how we can obtain an initialization that is robust against local optima.
3.1 Model Selection
Setting the derivatives (Eq. 5) equal to zero yields the following condition at stationary points of F:
w_k = Σ_i α'_ki x_i    (6)
where we have defined
α'_ki ≡ (1 / (2λN)) p_ki [ log(p_ki / p̂_k) − Σ_c p_ci log(p_ci / p̂_c) ].    (7)
The L2 regularizing function R(W; λ) in Eq. 3 is additively composed of penalty terms associated with each category: w_k^T w_k = Σ_{i,j} α'_ki α'_kj x_i^T x_j. It is instructive to observe the limiting behavior of the penalty term w_k^T w_k when datapoints are not assigned to category k; that is, when p̂_k = (1/N) Σ_i p_ki → 0. This implies that p_ki → 0 for all i, and therefore α'_ki → 0 for all i. Finally, w_k^T w_k = Σ_{i,j} α'_ki α'_kj x_i^T x_j → 0. This means that the regularizing function does not penalize unpopulated categories.
We find empirically that when we initialize with a large number of category weights w_k, many decay away depending on the value of λ. Typically, as λ increases, fewer categories are discovered. This may be viewed as model selection (automatic determination of the number of categories), since the regularizing function and parameter λ may be interpreted as a form of prior on the weight parameters. The bias terms b_k are unpenalized and are adjusted during optimization to drive the class probabilities p̂_k arbitrarily close to zero for unpopulated classes. This is illustrated in Figure 2.
This behavior suggests an effective initialization procedure for our algorithm. We first oversegment
the data into a large number of clusters (using k-means or other suitable algorithm) and train a
supervised multi-logit classifier using these cluster labels. (This initial classifier may be trained with
a small number of L-BFGS iterations since it only serves as a starting point.) We then use this
classifier as the starting point for our RIM algorithm and optimize with different values of λ in order
to obtain solutions with different numbers of clusters.
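A sketch of this initialization under the stated assumptions; it substitutes scikit-learn's k-means and logistic regression for the paper's own L-BFGS multilogit training:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rim_init(X, n_clusters=50):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)  # oversegment the data
    clf = LogisticRegression(max_iter=25)  # a rough fit suffices for a starting point
    clf.fit(X, labels)
    return clf.coef_.copy(), clf.intercept_.copy()  # W0 (K x D), b0 (K,)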
4 Example Application: Unsupervised Kernel Multilogit Regression
The stationary conditions have another interesting consequence. Equation 6 indicates that at stationary points, the weights are located in the span of the input datapoints. We use this insight as justification to define explicit coefficients α_ki and enforce the constraint w_k = Σ_i α_ki x_i during optimization. Substituting this equation into the multilogit regression conditional likelihood allows replacement of all inner products w_k^T x with Σ_i α_ki K(x_i, x), where K is a positive definite kernel function that evaluates the inner product x_i^T x. The conditional model now has the form
p(y = k|x, α, b) ∝ exp( Σ_i α_ki K(x_i, x) + b_k ).
Substituting the constraint into the regularizing function Σ_k w_k^T w_k yields a natural replacement of w_k^T w_k by the Reproducing Kernel Hilbert Space (RKHS) norm of the function Σ_i α_ki K(x_i, ·):
R(α) = Σ_k Σ_{i,j} α_ki α_kj K(x_i, x_j).    (8)
We use the L-BFGS algorithm to optimize the kernelized algorithm over the coefficients α_ki and biases b_k. The partial derivatives for the kernel coefficients are
∂F/∂α_kj = (1/N) Σ_i K(x_j, x_i) p_ki [ log(p_ki / p̂_k) − Σ_c p_ci log(p_ci / p̂_c) ] − 2λ Σ_i α_ki K(x_j, x_i)
and the derivatives for the biases are unchanged. The gradient of the kernelized algorithm requires O(KN²) to compute. Kernelized unsupervised multilogit regression exhibits the same model
selection behavior as the linear algorithm.
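A sketch of the kernelized objective with a precomputed Gram matrix; the regularizer of Eq. 8 becomes Σ_k α_k^T G α_k (names are ours):

import numpy as np

def kernel_rim_objective(A, b, G, lam):
    # A: K x N coefficients alpha_ki; G: N x N Gram matrix K(x_i, x_j); b: K biases
    logits = G @ A.T + b
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    p_hat = P.mean(axis=0)
    eps = 1e-12
    mi = -np.sum(p_hat * np.log(p_hat + eps)) + np.mean(np.sum(P * np.log(P + eps), axis=1))
    return mi - lam * np.sum(A * (A @ G))  # Eq. 8: sum_k alpha_k^T G alpha_k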
5 Extensions
We now discuss how RIM can be extended to semi-supervised classification, and to encode prior
assumptions about class proportions.
5.1 Semi-supervised Classification
In semi-supervised classification, we assume that there are unlabeled examples X^U = {x^U_1, ..., x^U_N} as well as labeled examples X^L = {x^L_1, ..., x^L_M} with labels Y^L = {y_1, ..., y_M}. We again use mutual information I_W{y; x} (Eq. 1) to define the relationship between unlabeled points and the model parameters, but we incorporate an additional parameter τ which will define the tradeoff between labeled and unlabeled examples. The conditional likelihood is incorporated for labeled examples to yield the semi-supervised objective:
S(W; λ, τ) = τ I_W{y; x} − R(W; λ) + Σ_i log p(y_i | x^L_i, W)
The gradient is computed and again used in the L-BFGS algorithm in order to optimize this combined objective. Our approach is related to the objective in [3], which does not contain the class balance term H{p̂(y; W)}.
5.2 Encoding Prior Beliefs about the Label Distribution
So far, we have motivated our choice for the objective function F through the notion of class balance.
However, in many classification tasks, different classes have different numbers of members. In the
following, we show how RIM allows flexible expression of prior assumptions about non-uniform
class label proportions.
First, note that the following basic identity holds
H{p̂(y; W)} = log(K) − KL{p̂(y; W) || U}    (9)
where U is the uniform distribution over the set of labels {1, ..., K}. Substituting the identity, then dropping the constant log(K), yields another interpretation of the objective
F(W; X, λ) = −(1/N) Σ_i H{p(y|x_i, W)} − KL{p̂(y; W) || U} − R(W; λ).    (10)
The term −KL{p̂(y; W) || U} is maximized when the average label distribution is uniform. We can capture prior beliefs about the average label distribution by substituting a reference distribution D(y; γ) in place of U (γ is a parameter that may be fixed or optimized during learning). [7] also use relative entropy as a means of enforcing prior beliefs, although not with respect to class distributions in multi-class classification problems.
This construction may be used in a clustering task in which we believe that the cluster sizes obey a power law distribution as, for example, considered by [8] who use the Pitman-Yor process for nonparametric language modeling. Simple manipulation yields the following objective:
F(W; X, λ, γ) = I_W{x; y} − H{p̂(y; W) || D(y; γ)} − R(W; λ)
where H{p̂(y; W) || D(y; γ)} is the cross entropy −Σ_k p̂(y = k; W) log D(y = k; γ). We therefore find that label distribution priors may be incorporated using an additional cross entropy regularization term.
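A sketch of the resulting regularizer; the power-law reference distribution below is an illustrative choice, not taken from the paper:

import numpy as np

def label_prior_cross_entropy(P, ref):
    # cross entropy H{p_hat(y; W) || D}; P is N x K with rows p(y|x_i, W)
    p_hat = P.mean(axis=0)
    return -np.sum(p_hat * np.log(ref + 1e-12))

# e.g. a power-law reference over K cluster sizes:
K = 10
ref = np.arange(1, K + 1, dtype=float) ** (-1.5)
ref /= ref.sum()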
[Figure 3 appears here: three panels (Caltech Images, DandD Graphs, NCI109 Graphs) plotting Adjusted Rand Index against the number of clusters for RIM, k-means, SC, and MMC.]
Figure 3: Unsupervised Clustering: Adjusted Rand Index (relative to ground truth) versus number
of clusters.
6 Experiments
We empirically evaluate our RIM approach on several real data sets, in both fully unsupervised and
semi-supervised configurations.
6.1 Unsupervised Learning
Kernelized RIM is initialized according to the procedure outlined in Section 3.1, and run until L-BFGS converges. Unlabeled examples are then clustered according to arg max_k p(y = k|x, W).
We compare RIM against the spectral clustering (SC) algorithm of [1], the fast maximum margin
clustering (MMC) algorithm of [9], and kernelized k-means [10]. MMC is a binary clustering algorithm. We use the recursive scheme outlined by [9] to extend the approach to multiple categories.
The MMC algorithm requires an initial clustering estimate for initialization, and we use SC to provide this.
We evaluate unsupervised clustering performance in terms of how well the discovered clusters reflect
known ground truth labels of the dataset. We report the Adjusted Rand Index (ARI) [11] between an
inferred clustering and the ground truth categories. ARI has a maximum value of 1 when two clusterings are identical. We evaluated a number of other measures for comparing clusterings to ground
truth including mutual information, normalized mutual information [12], and cluster impurity [13].
We found that the relative rankings of the algorithms were the same as indicated by ARI.
We evaluate the performance of each algorithm while varying the number of clusters that are discovered, and we plot ARI for each setting. For SC and k-means the number of clusters is given as
an input parameter. MMC is evaluated at {2, 4, 8, ...} clusters (powers of two, due to the recursive scheme). For RIM, we sweep the regularization parameter λ and allow the algorithm to discover the
final number of clusters.
Image Clustering. We test the algorithms on an image clustering task with 350 images from four
Caltech-256 [14] categories (Faces-Easy, Motorbikes, Airplanes, T-Shirt) for a total of N = 1400
images. We use the Spatial Pyramid Match kernel [15] computed between every pair of images.
We sweep RIM's λ parameter across [0.125/N, 4/N]. The results are summarized in Figure 3. Overall,
the clusterings that best match ground truth are given by RIM when it discovers four clusters. We
find that RIM outperforms both SC and MMC at all settings. RIM outperforms kernelized k-means
when discovering between 4 and 8 clusters. Their performances are comparable for other numbers
of clusters. Figure 4 shows example images taken from clusters discovered by RIM. Our RIM
implementation takes approximately 110 seconds per run on the Caltech Images dataset on a quad
core Intel Xeon server. SC requires 38 seconds per run, while MMC requires 44-51 seconds per run
depending on the number of clusters specified.
Molecular Graph Clustering. We further test RIM's unsupervised learning performance on two
molecular graph datasets. D&D [16] contains N = 1178 protein structure graphs with binary
ground truth labels indicating whether or not they function as enzymes. NCI109 [17] is composed
of N = 4127 compounds labeled according to whether or not they are active in an anti-cancer
screening. We use the subtree kernel developed by [18] with subtree height of 1. For D&D, we sweep RIM's λ parameter through the range [0.001/N, 0.05/N] and for NCI109 we sweep through the interval [0.001/N, 1/N]. Results are summarized in Figure 3 (center and right). We find that of all
methods, RIM produces the clusterings that are nearest to ground truth (when discovering 2 clusters
[Figure 4 appears here: left, example cluster images (clusters C1 through C5); right, test accuracy versus number of labeled examples for RIM, Grandvalet & Bengio, and the supervised baseline.]
Figure 4: Left: Randomly chosen example images from clusters discovered by unsupervised RIM on Caltech Images. Right: Semi-supervised learning on Caltech Images.
Figure 5: Left, Tetrode dataset average waveform. Right, the waveform with the most uncertain
cluster membership according to the classifier learned by RIM.
for D&D and 5 clusters for NCI109). RIM outperforms both SC and MMC at all settings. RIM has
the advantage over k-means when discovering a small number of clusters and is comparable at other
settings. On NCI109, RIM required approximately 10 minutes per run. SC required approximately
13 minutes, while MMC required on average 18 minutes per run.
Neural Tetrode Recordings. We demonstrate RIM on a large scale data set of 319,209 neural activity waveforms recorded from four co-located electrodes implanted in the hippocampus of a behaving rat. The waveforms are composed of 38 samples from each of the four electrodes and are the output of a neural spike detector which aligns signal peaks to the 13-th sample; see the average waveform in Figure 5 (left). We concatenate the samples into a single 152-dimensional vector and preprocess by subtracting the mean waveform and dividing each vector component by its variance. We use the linear RIM algorithm given in Section 3, initialized with 100 categories. We set λ to 4/N and RIM discovers 33 clusters and finishes in 12 minutes. There is no ground truth available for this dataset, but we use it to demonstrate RIM's efficacy as a data exploration tool. Figure 6 shows two clusters discovered by RIM. The top row consists of cluster member waveforms superimposed on each other, with the cluster's mean waveform plotted in red. We find that the clustered waveforms have substantial similarity to each other. Taken as a whole, the clusters give an idea of the typical waveform patterns. The bottom row shows the learned classifier's discriminative weights w_k for each category, which can be used to gain a sense for how the cluster's members differ from the dataset mean waveform. We can use the probabilistic classifier learned by RIM to discover atypical waveforms by ranking them according to their conditional entropy H{p(y|x_i, W)}. Figure 5 (right)
shows the waveform whose cluster membership is most uncertain.
Figure 6: Two clusters discovered by RIM on the Tetrode data set. Top row: Superimposed
waveform members, with cluster mean in red. Bottom row: The discriminative category weights
w_k associated with each cluster.
6.2 Semi-supervised Classification
We test our semi-supervised classification method described in Section 5.1 against [3] on the Caltech Images dataset. The methods were trained using both unlabeled and labeled examples, and
classification performance is assessed on the unlabeled portion. As a baseline, a supervised classifier was trained on labeled subsets of the data and tested on the remainder. Parameters were selected
via cross-validation on a subset of the labeled examples. The results are summarized in Figure 4.
We find that both semi-supervised methods significantly improve classification performance relative to the supervised baseline when the number of labeled examples is small. Additionally, we
find that RIM outperforms Grandvalet & Bengio. This suggests that incorporating prior knowledge
about class size distributions (in this case, we use a uniform prior) may be useful in semi-supervised
learning.
7 Related Work
Our work has connections to existing work in both unsupervised learning and semi-supervised classification.
Unsupervised Learning. The information bottleneck method [19] learns a conditional model
p(y|x) where the labels y form a lossy representation of the input space x, while preserving information about a third "relevance" variable z. The method maximizes I(y; z) − βI(x; y), whereas
we maximize the information between y and x while constraining complexity with a parametric
regularizer. The method of [20] aims to maximize a similarity measure computed between members
within the same cluster while penalizing the mutual information between the cluster label y and the
input x. Again, mutual information is used to enforce a lossy representation of y|x. Song et al. [22]
also view clustering as maximization of the dependence between the input variable and output label variable. They use the Hilbert-Schmidt Independence Criterion as a measure of dependence,
whereas we use Mutual Information.
There is also an unsupervised variant of the Support Vector Machine, called max-margin clustering. Like our approach, the works of [2] and [21] use notions of class balance, seperation, and
regularization to learn unsupervised discriminative classifiers. However, they are formulated in the
max-margin framework rather than our probabilistic approach. Ours appears more amenable to
incorporating prior beliefs about the class labels. Unsupervised SVMs are solutions to a convex
relaxation of a non-convex problem, while we directly optimize our non-convex objective. The
semidefinite programming methods required are much more expensive than our approach.
Semi-supervised Classification. Our semi-supervised objective is related to [3], as discussed in
section 5.1. Another semi-supervised method [23] uses mutual information as a regularizing term to
be minimized, in contrast to ours which attempts to maximize mutual information. The assumption
underlying [23] is that any information between the label variable and unlabeled examples is an
artifact of the classifier and should be removed. Our method encodes the opposite assumption:
there may be variability (e.g. new class label values) not captured by the labeled data, since it is
incomplete.
8 Conclusions
We considered the problem of learning a probabilistic discriminative classifier from an unlabeled
data set. We presented Regularized Information Maximization (RIM), a probabilistic framework
for tackling this challenge. Our approach consists of optimizing an intuitive information theoretic
objective function that incorporates class separation, class balance and classifier complexity, which
may be interpreted as maximizing the mutual information between the empirical input and implied
label distributions. The approach is flexible, in that it allows consideration of different likelihood
functions. It also naturally allows expression of prior assumptions about expected label proportions
by means of a cross-entropy with respect to a reference distribution. Our framework allows
natural incorporation of partial labels for semi-supervised learning. In particular, we instantiate the
framework to unsupervised, multi-class kernelized logistic regression. Our empirical evaluation
indicates that RIM outperforms existing methods on several real data sets, and demonstrates that
RIM is an effective model selection method.
Acknowledgements
We thank Alex Smola for helpful comments and discussion, and Thanos Siapas for providing the neural tetrode
data. This research was partially supported by NSF grant IIS-0953413, a gift from Microsoft Corporation, and
ONR MURI Grant N00014-06-1-0734.
References
[1] A. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, 2001.
[2] L. Xu and D. Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In AAAI, 2005.
[3] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In NIPS, 2004.
[4] John S. Bridle, Anthony J. R. Heading, and David J. C. MacKay. Unsupervised classifiers, mutual information and "phantom targets". In John E. Moody, Steve J. Hanson, and Richard P. Lippmann, editors, Advances in Neural Information Processing Systems, volume 4, pages 1096-1101. Morgan Kaufmann Publishers, Inc., 1992.
[5] Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation, September 2004.
[6] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45:503-528, 1989.
[7] T. Jaakkola, M. Meila, and T. Jebara. Maximum entropy discrimination. In NIPS, 1999.
[8] Y. W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In ACL, 2006.
[9] K. Zhang, I. W. Tsang, and J. T. Kwok. Maximum margin clustering made practical. In ICML, 2007.
[10] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004.
[11] Lawrence Hubert and Phipps Arabie. Comparing partitions. Journal of Classification, 2:193-218, 1985.
[12] Alexander Strehl and Joydeep Ghosh. Cluster ensembles: a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3:583-617, 2002.
[13] Y. Chen, J. Ze Wang, and R. Krovetz. CLUE: cluster-based retrieval of images by unsupervised learning. IEEE Trans. Image Processing, 14(8):1187-1201, 2005.
[14] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007.
[15] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[16] P. D. Dobson and A. J. Doig. Distinguishing enzyme structures from non-enzymes without alignments. J. Mol. Biol., 330:771-783, Jul 2003.
[17] Nikil Wale and George Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. In ICDM, pages 678-689, 2006.
[18] N. Shervashidze and K. M. Borgwardt. Fast subtree kernels on graphs. In NIPS, 2010.
[19] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. CoRR, physics/0004057, 2000.
[20] N. Slonim, G. S. Atwal, G. Tkacik, and W. Bialek. Information-based clustering. Proc Natl Acad Sci U S A, 102(51):18297-18302, December 2005.
[21] Francis Bach and Zaïd Harchaoui. DIFFRAC: a discriminative and flexible framework for clustering. In John C. Platt, Daphne Koller, Yoram Singer, and Sam T. Roweis, editors, NIPS. MIT Press, 2007.
[22] Le Song, Alex Smola, Arthur Gretton, and Karsten M. Borgwardt. A dependence maximization view of clustering. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pages 815-822, New York, NY, USA, 2007. ACM.
[23] A. Corduneanu and T. Jaakkola. On information regularization. In UAI, 2003.
3,484 | 4,155 | A Novel Kernel for Learning a Neuron Model from
Spike Train Data
Nicholas Fisher, Arunava Banerjee
Department of Computer and Information Science and Engineering
University of Florida
Gainesville, FL 32611
{nfisher,arunava}@cise.ufl.edu
Abstract
From a functional viewpoint, a spiking neuron is a device that transforms input
spike trains on its various synapses into an output spike train on its axon. We
demonstrate in this paper that the function mapping underlying the device can be
tractably learned based on input and output spike train data alone. We begin by
posing the problem in a classification based framework. We then derive a novel
kernel for an SRM0 model that is based on PSP and AHP like functions. With
the kernel we demonstrate how the learning problem can be posed as a Quadratic
Program. Experimental results demonstrate the strength of our approach.
1 Introduction
Neurons are the predominant component of the nervous system and understanding them is a major challenge in modern neuroscience research [1]. Many neuron models have been proposed to
understand the dynamics of individual and populations of neurons. Although these models vary
in complexity, at a fundamental level they are mechanisms which transform input spike trains into
an output spike train. This view has found expression in the Quantitative Single-Neuron Modeling
competition where submitted models compete on how accurately they can predict the output spike
train of a biological neuron given an input current [2]. Since the vast majority of neurons receive
input from chemical synapses [3], a stricter stipulation would be to predict output spikes based on
input spike trains at the various synapses of the neuron. There are advantages to this variation of
the problem: complicated subthreshold fluctuations in the membrane potential need not be modeled,
since models are now judged strictly on the basis of their performance at predicting the timing of
output spikes. Models now have the liberty to focus on threshold crossings at the expense of being inaccurate in the subthreshold regime. Not only does the model better represent the functional
complexity of the input/output transformation of a neuron, comparisons to the real neuron can be
conducted in a non-invasive manner.
In this paper we learn a Spike Response Model 0 (SRM0) [4] approximation of a neuron by only
considering the timing of all afferent (incoming) and efferent (outgoing) spikes of the neuron over
a bounded past. We begin by formulating the problem in a classification based supervised learning
framework where spike train data is labeled according to whether the neuron is about to spike, or
has recently spiked. We demonstrate that optimizing the model to properly classify this labeled data
naturally leads to a quadratic programming problem when combined with an appropriate representation of the model via a dictionary of functions. We then derive a novel kernel on spike trains which
is computed from a dictionary of post-synaptic potential (PSP) and after-hyperpolarizing potential
(AHP) like functions. Finally, experimental results are presented to demonstrate the efficacy of the
approach. For a complementary approach to learning a neuron model from spike train data, see [5].
An SRM0 model was chosen for several reasons. First, SRM0 has been shown to be fairly versatile
and accurate at modeling biological neurons [6]. Second, SRM0 is a relatively simple neuron model,
and therefore is likely to display better generalizability on unseen input. Finally, the disparity between the learned neuron model and the actual neuron could shed light on the various operational
modes of biological neurons. It is conceivable that the learned SRM0 model accurately predicts the
behavior of the neuron a majority of the time. However, there could be states, bursting for example,
where the prediction diverges. In such a case, the neuron can be seen as operating in two different modes, one SRM0 like, and the other not. Multiple models could then be learned to model the
neuron in its various operational modes.
2 General model of the neuron
It has been shown that, if one assumes a neuron to be a finite precision device with fading memory and a refractory period, the membrane potential of the neuron, P, can be modeled as a function of the timing of the neuron's afferent and efferent spikes which have occurred within a bounded past [7]. Spikes that have aged past this bound, denoted by Υ, are considered to have a negligible effect on the present value of P. We denote the arrival times of spikes at synapse j using the vector t^j = ⟨t^j_1, t^j_2, ..., t^j_{N_j}⟩, where N_j is bounded from above by the number of spikes that can be present in an Υ window of time. t^0 represents the output spike train of the neuron and vectors t^1, ..., t^m represent spike trains on the input synapses. t^j_i represents the time that has elapsed since that spike was generated or received by the neuron. Spikes are only considered if they occurred within Υ time. We can then formalize the membrane potential function P: R^N → R, where N = Σ_{j=0}^{m} N_j. P(t^0, ..., t^m) is defined over the space of all spike trains and reports the present membrane potential of the neuron. The neuron generates a spike when P(t^0, ..., t^m) = θ and dP/dt ≥ 0, where θ is the threshold of the neuron. For notational simplicity, we define the spike configuration, s ∈ R^N, which represents the timing of all afferent and efferent spikes within the window of length Υ. s is the vector of vectors, s = ⟨t^0, ..., t^m⟩. The neuron generates a spike when P(s) = θ, dP/dt ≥ 0.
As discussed in Section 1, we shall learn an SRM0 approximation of the neuron. The SRM0 model
uses a bounded past history as described above to calculate the present membrane potential of the
neuron. The present membrane potential P̂ is calculated as shown in Equation 1. η models the effect of a past generated spike, the AHP. ε_j represents the response of the neuron to a presynaptic spike at synapse j, the PSP. u_rest is the resting membrane potential. At any given time, the neuron generates a spike if the membrane potential crosses the threshold from below (i.e., P̂(s) = θ, dP̂/dt ≥ 0).
P̂(s) = Σ_{i=1}^{N_0} η(t^0_i) + Σ_{j=1}^{m} Σ_{i=1}^{N_j} ε_j(t^j_i) + u_rest    (1)
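A direct transcription of Eq. 1 as code; the AHP and PSP kernels passed in below are illustrative placeholders, not the paper's:

import math

def membrane_potential(s, eta, eps_list, u_rest):
    # s = [t0, t1, ..., tm]: elapsed spike times per train, all within the window
    P = u_rest + sum(eta(t) for t in s[0])       # AHP from past output spikes
    for eps_j, tj in zip(eps_list, s[1:]):
        P += sum(eps_j(t) for t in tj)           # PSPs from each input synapse
    return P

# illustrative kernels only: exponential AHP, alpha-shaped PSP
eta = lambda t: -2.0 * math.exp(-t / 10.0)
psp = lambda t: 0.8 * (t / 5.0) * math.exp(1.0 - t / 5.0)
print(membrane_potential([[4.0], [2.0, 9.0]], eta, [psp], u_rest=-70.0))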
Classification Problem
In order to learn an SRM0 approximation of a neuron in a non-invasive manner, we pose a supervised learning classification problem which labels the given spike train data according to whether
the neuron is about to spike or has recently spiked. We denote the former S ? and the latter S + . This
problem is equivalent to classifying subthreshold spike configurations (P? (s) < ?) from suprathreshold spike configurations (P? (s) ? ?), which leads to the classification problem shown in Equation 2.
It should be noted that the true membrane potential function, P , is a feasible solution to this problem
since P (s) < ? ?s ? S ? and P (s) ? ? ?s ? S + .
2
Min.
P? (s)
s.t. P? (s) ? ? ? 1 ?s ? S + AND P? (s) ? ? ? ?1 ?s ? S ?
(2)
To generate training data which belong to S + and S ? , we provide the spike configurations which
occur at a fixed infinitesimal time differential before and after the neuron generates a spike, as
illustrated in Figure 1(a). The spike train at the instant the neuron generated a spike is shown by
the solid lines. We shift the spike window infinitesimally into the past (future) to produce a spike
configuration s ? S ? (S + ), shown by the up (down) arrows. Notice that the spike which is currently
2
generated in the output spike train, t0 , emphasized by the dashed circle, is not included in either
spike configuration s. The reason it is not included in s ? S ? is that it simply has not been generated
at that point in time. The reason it is not included in s ? S + is twofold. First, the spike would induce
an AHP effect which would cause the membrane potential to fall below the threshold. Second, if it
were included, this would cause the classifier to only consider whether or not that particular spike
existed when classifying a given spike configuration as a member of S + or S ? . If it did exist, it
would belong to S + , and if it did not exist it would belong to S ? . Although this method would work
well for the training data, it would not generalize to unseen live spike train data.
[Figure 1 appears here: (a) spike-train schematic of the S+ and S− configurations; (b) REEF surface over (α, τ) at fixed t; (c) REEF cross sections as functions of t.]
Figure 1: Figure (a) depicts the spike configurations used in the classification problem. Figure (b) shows the REEF for a fixed value of t = 1s and variable α and τ values. Figure (c) portrays the form of cross sections of the REEF as a function of t for different values of α and τ.
Producing a hypersurface which can separate the supra-threshold spike configurations from the subthreshold spike configurations within the spike time feature space would be extremely difficult. As discussed above, if we could map a given spike configuration s to its corresponding membrane potential P(s), then the classification problem is trivial. Although we do not have access to the membrane potential function, we can use a linear combination of functions from a dictionary to reproduce an approximation to the membrane potential function P. The choice of the dictionary is crucial. By choosing a dictionary which is tailored to the form of typical PSP and AHP functions, we increase the likelihood of successfully modeling the given neuron.
The SRM0 model is an additively separable model [8]; that is, the membrane potential is a sum of functions of the individual spikes of the spike configuration (P̂(s) = Σ_{j=0}^{m} Σ_{i=1}^{N_j} P̂_j(t^j_i)). This feature lends itself well to modeling the membrane potential using a linear combination of dictionary elements. The dictionary used here was one derived from a function used by MacGregor and Lewis for neuron modeling [9]. It consists of functions (parametrized by α and τ) of the form
f_{α,τ}(t) = (1/τ) · exp(−α/t) · exp(−t/τ)    (3)
We call this the reciprocal exponential-exponential function (REEF) dictionary. Figures 1(b) and (c) present the dictionary for various cross sections of t, α and τ.
4 Approximation of the membrane potential function
We would like to combine members of the chosen dictionary of functions to construct an approximation of the membrane potential function, P , which will yield a solution to the classification problem
posed in Equation 2. We shall first discuss how this can be achieved in a discrete setting, where we
combine a finite number of ? and ? parametrized dictionary functions to model P . Following this
we will discuss a continuous formulation, in which we combine elements drawn from an infinite
continuous range of ? and ? parametrized dictionary functions to model P . In the context of the
continuous formulation, we will prove a specific instance of the Representer theorem which was first
shown by Kimeldorf and Wahba [10]. The Representer theorem shows that the optimal solution to
the posed classification problem must lie in the span of the data points which were used to train the
classifier. In the discrete and continuous formulation, we will first model the effect of a single spike
for simplicity. We will conclude this section by extending the continuous formulation to the case of
multiple spikes on a single synapse, and the case of multiple spikes on multiple synapses.
4.1 Discrete Formulation
In the discrete formulation, we wish to approximate the membrane potential function using a linear
combination of a finite, predefined set of functions from the REEF dictionary. Focusing on the
single spike case, our goal is to model the effect of a single spike on the membrane potential. We
denote this effect on the membrane potential by P̂, and it is defined as a linear combination of parametrized REEF functions as shown in Equation 4. f_t(α, τ) = (1/τ) · exp(−α/t) · exp(−t/τ) is now a univariate function of t for fixed values of α and τ. A specific set of parameter settings {(α_1, τ_1), ..., (α_M, τ_1), (α_1, τ_2), ..., (α_M, τ_N)} is used to construct a P̂ that can best reproduce the effect of the spike on the membrane potential. Inserting Equation 4 into Equation 2 yields a quadratic optimization problem on the mixing coefficients λ_{i,j}.
P̂(t) = Σ_{i=1}^{M} Σ_{j=1}^{N} λ_{i,j} f_t(α_i, τ_j)    (4)
The major disadvantage of the discrete formulation is that for any given neuron, the optimal value set of the α's and τ's is unlikely to be known beforehand. While one can argue that the approximation P̂ can be improved by increasing M and N, as the number of functions increases, so does the
dimensionality of the feature space. Since M and N can be increased independent of the size of
the training dataset, the procedure is susceptible to over-fitting. To resolve this issue, we shift to a
continuous formulation of the problem, which by virtue of the Representer theorem does not suffer
from the rising feature space dimensionality issue. The dimensionality of the feature space is now
controlled by the span of the training dataset.
4.2 Continuous formulation
In the continuous formulation, we consider L2, the Hilbert space of square integrable functions on the domain (α, τ) ∈ [0, ∞)². We are concerned with finding a threshold dependent classification function P̂, such that P̂(t) ≥ θ + 1 when the spike t ∈ S+ and P̂(t) ≤ θ − 1 when t ∈ S−. This function is defined in Equation 5.
P̂(t) = ⟨λ(α, τ), f_t(α, τ)⟩ = ∫₀^∞ ∫₀^∞ λ(α, τ) f_t(α, τ) dα dτ    (5)
In this formulation, the mixing function, λ(α, τ), is by definition a member of L2. Therefore, if f_t(α, τ) ∈ L2, then P̂(t) is finite by the Cauchy-Schwarz inequality, since ⟨λ(α, τ), f_t(α, τ)⟩ ≤ ‖λ(α, τ)‖ · ‖f_t(α, τ)‖ < ∞ if both ‖λ(α, τ)‖ < ∞ and ‖f_t(α, τ)‖ < ∞. To show that f_t(α, τ) ∈ L2 we must show ⟨f_t(α, τ), f_t(α, τ)⟩ < ∞. For ease of readability we shall henceforth suppress the domain variables in f_t(α, τ) and λ(α, τ) and refer to them as f_t and λ.
4.2.1 Proof
⟨f_x, f_y⟩ = ∫₀^∞ ∫₀^∞ (1/τ) exp(−α/x) exp(−x/τ) · (1/τ) exp(−α/y) exp(−y/τ) dα dτ    (6)
= xy / (x + y)²    (7)
Therefore ⟨f_t, f_t⟩ = t·t / (t + t)² = 1/4 < ∞ ∀t ∈ [ε, ∞) for some ε > 0.
We must note here that by defining the membrane potential function in this manner, we have formulated a problem which yields a solution which is different from the solution to the discrete problem.
Since the delta function centered at any arbitrary point (α*, τ*) does not belong to L2, the mixing function λ cannot be made up of a linear combination of these delta functions, as is the case in the discrete formulation. In addition, we are not working with a reproducing kernel Hilbert space since we are considering L2. However, our definition in Equation 5 defines the "point evaluation" of our membrane potential function.
Since P̂(t) is defined using the standard inner product in L2 with respect to particular members of
L2 , we can reformulate the classification problem in Equation 2 as shown in Equation 8. Here M is
the number of data points, m = 1, ..., M, and y_m is the corresponding classification for spike time t_m (that is, y_m = +1 if t_m ∈ S+ and y_m = −1 if t_m ∈ S−).
Min. ‖λ‖²   s.t.   y_m (⟨λ, f_{t_m}⟩ − θ) ≥ 1,  m = 1, ..., M    (8)
We can now use a specific instance of the Representer theorem [10] to show that the optimal solution λ to the optimization problem specified in Equation 8 can be expressed as λ = Σ_{k=1}^{M} β_k f_{t_k}.
We can then substitute this equality back into Equation 8 to produce a dual formulation of the
optimization problem, which is a standard quadratic programming problem.
4.2.2 Representer Theorem
For some β_1, β_2, ..., β_M ∈ R, the solution to Equation 8 can be written in the form
λ = Σ_{k=1}^{M} β_k f_{t_k}    (9)
Proof We consider the subspace of L2 spanned by the REEF functions evaluated at the times of the given training data points (span{ f_{t_k} : 1 ≤ k ≤ M }). We then consider the projection λ_∥ of λ on this subspace. By noting λ = λ_∥ + λ_⊥ and rewriting Equation 8 in its Lagrangian form, we are left with Equation 10. However, by the definition of λ_⊥, ⟨λ_⊥, f_{t_k}⟩ = 0, which then simplifies the summation term of Equation 10 to only depend upon λ_∥ as shown in Equation 11.
Min. ‖λ‖² + Σ_{k=1}^{M} η_k [ 1 − y_k ( ⟨λ_∥, f_{t_k}⟩ + ⟨λ_⊥, f_{t_k}⟩ − θ ) ]    (10)
Min. ‖λ‖² + Σ_{k=1}^{M} η_k [ 1 − y_k ( ⟨λ_∥, f_{t_k}⟩ − θ ) ]    (11)
In addition, by considering the relation shown in Equation 12, we find that the first term is minimized when λ = λ_∥. Hence, the optimal solution to Equation 8 will lie in the aforementioned subspace and therefore have the form of Equation 9.
‖λ‖² = ‖λ_∥‖² + ‖λ_⊥‖² ≥ ‖λ_∥‖²    (12)
4.2.3 Dual Representation
We can now substitute the form of the optimal solution shown in Equation 9 back into the original
optimization problem shown in Equation 8. This leads to the problem in Equation 13 which is
equivalent to Equation 14. The resultant quadratic programming problem is solvable given that we
have access to the positive definite matrix K, which was derived in Section 4.2.1 and is shown in
Equation 15.
2
M
*M
+
!
X
X
Min.
?k ftk
s.t. ym
?k ftk , ftm ? ? ? 1 m = {1 . . . M }
(13)
k=1
k=1
!
M X
M
M
X
X
Min.
?i ?j K(ti , tj ) s.t. ym
?k K(tk , tm ) ? ? ? 1 m = {1 . . . M }
(14)
i=1 j=1
k=1
Z ?
Z
K(ti , tj ) = hfti , ftj i =
fti ftj d? d? =
0
4.3
?
0
ti tj
(ti + tj )2
(15)
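To make Equation 14 concrete, here is a minimal sketch (ours, not from the paper) of solving the resulting problem with a precomputed REEF kernel; the spike times and labels are toy values, and the large C of the soft-margin SVM is only a stand-in for the hard-margin constraints.

```python
import numpy as np
from sklearn.svm import SVC

def reef_kernel(s, t):
    """Gram matrix with entries K[i, j] = s_i * t_j / (s_i + t_j)**2 (Equation 15)."""
    S, T = np.meshgrid(s, t, indexing="ij")
    return S * T / (S + T) ** 2

# toy spike times (ms) and labels y_m = +1 (t_m in S+) or -1 (t_m in S-)
times = np.array([2.0, 5.0, 9.0, 14.0, 20.0, 31.0])
y = np.array([1, 1, -1, 1, -1, -1])

K = reef_kernel(times, times)
clf = SVC(C=1e6, kernel="precomputed")  # large C approximates the hard margin
clf.fit(K, y)

# new spike times are classified through kernel values against training times
print(clf.predict(reef_kernel(np.array([4.0, 25.0]), times)))
```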
4.3 Single Synapse
We are now in a position to extend the framework to multiple spikes on a single synapse. Since
we are learning an SRM0 approximation of a neuron, we assume that the effects of spikes are
additively separable [8] and that each spike's effect on the membrane potential for the given synapse
is identical. Introducing the latter assumption is the core contribution of this section. We first define
the threshold dependent classification function for a single spike in a manner identical to that of
the single spike formulation shown in Equation 5. This will be the ?stereotyped? effect that a spike
arriving at this synapse has on the membrane potential. Note that the AHP effect of the output spike
train can be modeled seamlessly (as a virtual synapse) in this framework.
4.3.1 Primal Problem
We now consider the additive effects of multiple spikes arriving at a synapse. We define the vector
$t^m = \langle t^m_1, t^m_2, \ldots, t^m_{N_m}\rangle$ to be the $m$-th data point, which consists of $N_m$ spikes, represented by
their spike times. Note that we have abused notation: instead of the superscript repeatedly referring
to the synapse in question, it now refers to the data point. The primal optimization problem, defined
in Equation 16, is equivalent to Equation 17.
$$\min\ \|\psi\|^2 \quad \text{s.t.} \quad y_m\Big(\sum_{h=1}^{N_m} \big\langle \psi, f_{t^m_h}\big\rangle - \nu\Big) \ge 1, \quad m = 1 \ldots M \qquad (16)$$
$$\min\ \|\psi\|^2 \quad \text{s.t.} \quad y_m\Big(\Big\langle \psi, \sum_{h=1}^{N_m} f_{t^m_h}\Big\rangle - \nu\Big) \ge 1, \quad m = 1 \ldots M \qquad (17)$$
The Representer theorem states that the optimal $\psi$ must lie in $\mathrm{span}\{\sum_{i=1}^{N_k} f_{t^k_i} : 1 \le k \le M\}$. We
omit the formal proof since it follows along the lines of the previous case. Therefore, the optimal $\psi$
to Equation 17 will be of the form
$$\psi = \sum_{k=1}^M \alpha_k \sum_{i=1}^{N_k} f_{t^k_i} \qquad (18)$$
4.3.2 Dual Problem
Substituting back Equation 18 yields the dual problem Equation 19, which can be solved given the
positive definite kernel in Equation 20.
$$\min\ \Big\|\sum_{k=1}^M \alpha_k \sum_{i=1}^{N_k} f_{t^k_i}\Big\|^2 \quad \text{s.t.} \quad y_m\Big(\sum_{k=1}^M \alpha_k \Big\langle \sum_{i=1}^{N_k} f_{t^k_i}, \sum_{h=1}^{N_m} f_{t^m_h}\Big\rangle - \nu\Big) \ge 1, \quad m = 1 \ldots M \qquad (19)$$
$$K(t^p, t^q) = \Big\langle \sum_{i=1}^{N_p} f_{t^p_i}, \sum_{k=1}^{N_q} f_{t^q_k}\Big\rangle = \sum_{i=1}^{N_p} \sum_{k=1}^{N_q} \big\langle f_{t^p_i}, f_{t^q_k}\big\rangle = \sum_{i=1}^{N_p} \sum_{k=1}^{N_q} \frac{t^p_i\, t^q_k}{(t^p_i + t^q_k)^2} \qquad (20)$$
4.4 Multiple Synapses
In the multiple synapse case, the principles are identical to that of the single synapse, with the
exception that spikes arriving at different synapses could have different effects on the membrane
potential, depending on the strength/type of the synaptic junction. Therefore, we keep the effects of
each synapse on the membrane potential separate by assigning each synapse its own $\psi$ function.
4.4.1 Primal Problem
Since each synapse and the output has its own $\psi$ function, this simply adds another summation term
over the $S$ synapses and the output (indexed by 0). The primal optimization problem is defined in
Equation 21, which is equivalent to Equation 22. $S$ is the number of synapses, $N_{m,s}$ is the number
of spikes on the $s$-th synapse of the $m$-th data point, and $t^{m,s}_h$ is the timing of the $h$-th spike on the
$s$-th synapse of the $m$-th data point.
$$\min\ \sum_{s=0}^S \|\psi_s\|^2 \quad \text{s.t.} \quad y_m\Big(\sum_{s=0}^S \sum_{h=1}^{N_{m,s}} \big\langle \psi_s, f_{t^{m,s}_h}\big\rangle - \nu\Big) \ge 1, \quad m = 1 \ldots M \qquad (21)$$
$$\min\ \sum_{s=0}^S \|\psi_s\|^2 \quad \text{s.t.} \quad y_m\Big(\sum_{s=0}^S \Big\langle \psi_s, \sum_{h=1}^{N_{m,s}} f_{t^{m,s}_h}\Big\rangle - \nu\Big) \ge 1, \quad m = 1 \ldots M \qquad (22)$$
The Representer theorem states that the optimal $\psi_s$ for the $s$-th synapse must lie in
$\mathrm{span}\{\sum_{i=1}^{N_{k,s}} f_{t^{k,s}_i} : 1 \le k \le M\}$. This is identical to the single synapse case for each synapse,
and therefore the optimal $\psi_s$ to Equation 22 will be of the form
$$\psi_s = \sum_{k=1}^M \alpha_k \sum_{i=1}^{N_{k,s}} f_{t^{k,s}_i} \qquad (23)$$
4.4.2 Dual Problem
Substituting Equation 23 into Equation 22 yields the dual problem shown in Equation 24, which can
be solved given access to the positive definite kernel defined in Equation 25.
$$\min\ \sum_{s=0}^S \Big\|\sum_{k=1}^M \alpha_k \sum_{i=1}^{N_{k,s}} f_{t^{k,s}_i}\Big\|^2 \quad \text{s.t.} \quad y_m\Big(\sum_{s=0}^S \sum_{k=1}^M \alpha_k \Big\langle \sum_{i=1}^{N_{k,s}} f_{t^{k,s}_i}, \sum_{h=1}^{N_{m,s}} f_{t^{m,s}_h}\Big\rangle - \nu\Big) \ge 1, \quad m = 1 \ldots M \qquad (24)$$
$$K(t^p, t^q) = \sum_{s=0}^S \Big\langle \sum_{i=1}^{N_{p,s}} f_{t^{p,s}_i}, \sum_{k=1}^{N_{q,s}} f_{t^{q,s}_k}\Big\rangle = \sum_{s=0}^S \sum_{i=1}^{N_{p,s}} \sum_{k=1}^{N_{q,s}} \frac{t^{p,s}_i\, t^{q,s}_k}{(t^{p,s}_i + t^{q,s}_k)^2} \qquad (25)$$
4.5 Summary
With the above kernels we are able to formulate quadratic programming problems which can be
solved with SVMlight [11]. The choice of the dictionary used to derive the kernel is critical to the
success of this technique. A dictionary of functions tailored to the forms of PSPs and AHPs will
perform better than a more general class of functions. The properties of the REEF dictionary which
make it suitable for this problem are its exponential decay as well as its additive separability [8]. This
explains why a Gaussian radial basis function (GRBF) does not work well for this problem. The
GRBF kernel is not additive. A slight variation of the GRBF which takes the sum of Gaussian
functions, rather than their product, was also explored. This performed better than the GRBF;
however it did not perform well when applied to more complicated neurons.
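For illustration, a direct transcription of the kernel of Equation 25 might look as follows; this sketch is ours, and the representation of a data point as a list of per-synapse spike-time arrays (index 0 standing for the output's virtual synapse) is an assumption of the example, not the paper's notation.

```python
import numpy as np

def reef_pair(ts_p, ts_q):
    """Sum of t_i * t_k / (t_i + t_k)**2 over all spike pairs on one synapse."""
    if len(ts_p) == 0 or len(ts_q) == 0:
        return 0.0
    P, Q = np.meshgrid(ts_p, ts_q, indexing="ij")
    return float(np.sum(P * Q / (P + Q) ** 2))

def multi_synapse_kernel(tp, tq):
    """Equation 25: sum the per-synapse REEF kernels over synapses s = 0..S."""
    return sum(reef_pair(np.asarray(tp[s], float), np.asarray(tq[s], float))
               for s in range(len(tp)))

# two toy data points: one output train plus two afferent synapses each
t_p = [[12.0], [3.0, 7.0], [5.0]]
t_q = [[], [4.0], [6.0, 9.0]]
print(multi_synapse_kernel(t_p, t_q))
```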
5 Results
To test the kernel we learned SRM0 neurons with increasing levels of complexity. We first considered a simplistic neuron which only received spikes on a single synapse. We then increased the
complexity of the neuron, by introducing AHP effects as well as different types (excitatory and inhibitory) of afferent synapses with varying synaptic weights. The PSP effect was modeled via the
classical alpha function [$\mathrm{PSP}(t) = C \cdot t \cdot \exp(-t/\tau)$] while the AHP effect was modeled by an
exponential function [$\mathrm{AHP}(t) = K \cdot \exp(-t/\tau)$]. Although we learned neurons with varying complexity, for want of space, we discuss here the case of a single neuron that received input spike trains
from 4 excitatory synapses and 1 inhibitory synapse to mimic the ratio of connections observed in
the cortex [12]. The stereotyped PSP for the excitatory and inhibitory synapses differed in their rise
and fall times. The parameters for the stereotyped PSP were set as follows. For the excitatory PSP,
$C = 0.1$ and $\tau = 10$, where $t$ is in units of milliseconds. For the inhibitory PSP, $C = -0.39$ and
$\tau = 5$. For the AHP, $K = -16.667$ and $\tau = 2$.
We first trained the classifier using 100,000 seconds of spike train data. Only the spike configurations
occurring at fixed differentials before and after the neuron emitted a spike were considered. The input spike trains were generated using an inhomogeneous Poisson process, where the rate was varied
sinusoidally around the intended mean spike rate in order to produce a more general set of training
data. This resulted in 1,647,249 training data points; however, only 10,681 of them were used in
the solution as support vectors. After training, we tested our model using 100 seconds of unseen
data. All spike configurations were considered when testing, regardless of temporal proximity to
spike generation. To quantify our results, we first calculated the accuracy (correct classifications /
total data points), the sensitivity (correct positive classifications / total positive data points), and the
specificity (correct negative classifications / total negative data points).
[Figure 2: three panels. (a) Histograms (Frequency vs. Spike Time Difference in ms) of the difference in time between the actual and predicted spike time by the learned model. (b) The various PSP approximations (gray) in comparison to the PSP functions used by the neuron (black), Voltage vs. Time (ms). (c) The AHP approximation (gray) and the AHP function used by the neuron (black), Voltage vs. Time (ms).]
They were 0.9947, 0.9532 and 0.9948 respectively. We also calculated a histogram of how close
the spike predictions were. For every spike produced by the neuron, we determined the temporal
proximity of the closest spike time predicted by the model. We then histogrammed this data. Figure
2(a) shows two histograms depicting these calculations. The larger histogram contains predictions
with time differences varying between 0 and 70 ms, with a bin size of 1 ms while the inlaid histogram ranges from 0 to 10 ms and has a bin size of 0.1 ms. Both use a logarithmic scale on the
y-axis. From the histograms, we see that the vast majority of spikes were predicted correctly (with
a temporal proximity of 0 ms) and that out of the mispredicted spike times, the temporal proximity
of all predicted spikes fell within 70 ms of the actual spike time.
In Figures 2(b) and 2(c) we display a comparison of the approximated PSP and AHP versus the true
PSP and AHP. To calculate the classification model's approximated PSP we artificially send a single
spike across each input synapse. We artificially generate a spike to produce the AHP approximation.
By considering the distance of the single spike data point from the classifier's margin as the spike
ages, we can get a scaled and translated version of the PSP and AHP. The figures show these approximations scaled and translated back appropriately. In Figure 2(b) we show the approximations
of the PSPs for the input synapses. The approximations are shown in gray; the true PSPs are shown
in black. The different line styles are representative of the different synapses and therefore have
varying synaptic weights. A similar image for the AHP is shown in Figure 2(c). We note that there
are small differences between the approximated and the true functions. If the PSP and AHP approximations were exact, we would have seen perfect classification results. However, as with most
machine learning techniques, the quality of the solution is limited by the training data given.
6 Conclusion
In this paper we have developed a classification framework which uses a novel kernel derived from
a REEF dictionary to produce an SRM0 approximation of a neuron. The technique used is noninvasive in the sense that it only requires the timing of afferent and efferent spikes within a certain
bounded past. The REEF dictionary was chosen due to its similarity to PSP and AHP functions used
in a neuron model proposed by MacGregor and Lewis [9].
By producing an SRM0 approximation, which is additively separable [8], we produce a model which
is both versatile and accurate [6]. In addition, it is a relatively simple model, which allows for
increased generalizability to unseen input. The simplicity of the SRM0 model has the potential to
allow us to observe deviations between the model and the neuron, which can lead to insights on the
various behavioral modes of neurons.
Acknowledgments
This work was supported by a National Science Foundation grant (NSF IIS-0902230) to A.B.
References
[1] R. Jolivet, A. Roth, F. Schürmann, W. Gerstner, and W. Senn. Special issue on quantitative neuron modeling. Biological Cybernetics, 99(4):237–239, 2008.
[2] W. Gerstner and R. Naud. How Good Are Neuron Models? Science, 326(5951):379–380, 2009.
[3] W. Gerstner and W. Kistler. Spiking Neuron Models: An Introduction. Cambridge University Press, New York, NY, USA, 2002.
[4] R. Jolivet, T.J. Lewis, and W. Gerstner. The spike response model: a framework to predict neuronal spike trains. Artificial Neural Networks and Neural Information Processing — ICANN/ICONIP 2003, pages 173–173, 2003.
[5] L. Paninski, J.W. Pillow, and E.P. Simoncelli. Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Computation, 16(12):2533–2561, 2004.
[6] R. Jolivet, T.J. Lewis, and W. Gerstner. Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. Journal of Neurophysiology, 92(2):959–976, 2004.
[7] A. Banerjee. On the phase-space dynamics of systems of spiking neurons. I: Model and experiments. Neural Computation, 13(1):161–193, 2001.
[8] Tadeusz Stanisz. Functions with separated variables. Master's thesis, Zeszyty Naukowe Uniwersytetu Jagiellońskiego, 1969.
[9] R.J. MacGregor and E.R. Lewis. Neural Modeling. Plenum Press, New York, 1977.
[10] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33(1):82–95, 1971.
[11] T. Joachims. Making large-scale support vector machine learning practical. In Advances in Kernel Methods, pages 169–184. MIT Press, 1999.
[12] E.M. Izhikevich. Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6):1569–1572, 2003.
3,485 | 4,156 | Learning Bounds for Importance Weighting
Corinna Cortes
Google Research
New York, NY 10011
Yishay Mansour
Tel-Aviv University
Tel-Aviv 69978, Israel
Mehryar Mohri
Courant Institute and Google
New York, NY 10012
[email protected]
[email protected]
[email protected]
Abstract
This paper presents an analysis of importance weighting for learning from finite
samples and gives a series of theoretical and algorithmic results. We point out
simple cases where importance weighting can fail, which suggests the need for an
analysis of the properties of this technique. We then give both upper and lower
bounds for generalization with bounded importance weights and, more significantly, give learning guarantees for the more common case of unbounded importance weights under the weak assumption that the second moment is bounded,
a condition related to the Rényi divergence of the training and test distributions.
These results are based on a series of novel and general bounds we derive for unbounded loss functions, which are of independent interest. We use these bounds to
guide the definition of an alternative reweighting algorithm and report the results
of experiments demonstrating its benefits. Finally, we analyze the properties of
normalized importance weights which are also commonly used.
1 Introduction
In real-world applications of machine learning, often the sampling of the training and test instances
may differ, which results in a mismatch between the two distributions. For example, in web search
applications, there may be data regarding users who clicked on some advertisement link but little
or no information about other users. Similarly, in credit default analyses, there is typically some
information available about the credit defaults of customers who were granted credit, but no such
information is at hand about rejected costumers. In other problems such as adaptation, the training
data available is drawn from a source domain different from the target domain. These issues of
biased sampling or adaptation have been long recognized and studied in the statistics literature.
There is also a large body of literature dealing with different techniques for sample bias correction
[11, 29, 16, 8, 25, 6] or domain adaptation [3, 7, 19, 10, 17] in the recent machine learning and
natural language processing literature.
A common technique used in several of these publications for correcting the bias or discrepancy is
based on the so-called importance weighting technique. This consists of weighting the cost of errors
on training instances to emphasize the error on some or de-emphasize it on others, with the objective
of correcting the mismatch between the distributions of training and test points, as in sample bias
correction, adaptation, and other related contexts such as active learning [24, 14, 8, 19, 5]. Different
definitions have been adopted for these weights. A common definition of the weight for point x is
w(x) = P (x)/Q(x) where P is the target or test distribution and Q is the distribution according to
which training points are drawn. A favorable property of this definition, which is not hard to verify,
is that it leads to unbiased estimates of the generalization error [8].
This paper presents an analysis of importance weighting for learning from finite samples. Our study
was originally motivated by the observation that, while this corrective technique seems natural, in
some cases in practice it does not succeed. An example in dimension two is illustrated by Figure 1.
The target distribution P is the even mixture of two Gaussians centered at (0, 0) and (0, 2) both with
[Figure 1 (left: scatter of the two samples in the plane; right: two panels of test Error vs. Training set size, for ratios $\sigma_Q/\sigma_P = 0.3$ and $\sigma_Q/\sigma_P = 0.75$). Caption: Example of importance weighting. Left figure: P (in blue) and Q (in red) are even mixtures of Gaussians. The labels are positive within the unit sphere centered at the origin (in grey), negative elsewhere. The hypothesis class is that of hyperplanes tangent to the unit sphere. Right figures: plots of test error vs. training sample size using importance weighting for two different values of the ratio $\sigma_Q/\sigma_P$. The results indicate mean values of the error over 40 runs ± one standard deviation.]
standard deviation $\sigma_P$, while the source distribution Q is the even mixture of two Gaussians centered
at (0, 0) and (2, 0) but with standard deviation $\sigma_Q$. The hypothesis class is that of hyperplanes
tangent to the unit sphere. The best classifier is selected by empirical risk minimization. As shown
in Figure 1, for $\sigma_Q/\sigma_P = .3$, the error of the hypothesis learned using importance weighting is close
to 50% even for a training sample of 5,000 points, and the standard deviation of the error is quite
high. In contrast, for $\sigma_Q/\sigma_P = .75$, convergence occurs relatively rapidly and learning is successful.
In Section 4, we discuss other examples where importance weighting does not succeed.
The problem just described is not limited to isolated examples. Similar observations have been made
in the past in both the statistics and learning literature, more recently in the context of the analysis
of boosting by [9] who suggest that importance weighting must be used with care and highlight the
need for convergence bounds and learning guarantees for this technique.
We study the theoretical properties of importance weighting. We show using standard generalization bounds that importance weighting can succeed when the weights are bounded. However, this
condition often does not hold in practice. We also show that, remarkably, convergence guarantees
can be given even for unbounded weights under the weak assumption that the second moment of the
weights is bounded, a condition that relates to the Rényi divergence of P and Q. We further extend
these bounds to guarantees for other possible reweightings. These results suggest minimizing a biasvariance tradeoff that we discuss and that leads to several algorithmic ideas. We explore in detail an
algorithm based on these ideas and report the results of experiments demonstrating its benefits.
Throughout this paper, we consider the case where the weight function w is known. When it is
not, it is typically estimated from finite samples. The effect of this estimation error is specifically
analyzed by [8]. This setting is closely related to the problem of importance sampling in statistics
which is that of estimating the expectation of a random variable according to P while using a sample
drawn according to Q, with w given [18]. Here, we are concerned with the effect of the weights on
learning from finite samples. A different setting is when full access to Q is further assumed; von
Neumann's rejection sampling technique [28] can then be used. We note however that it requires w
to be bounded by some constant M , which is often not guaranteed and is the simplest case of our
bounds. Even then, the method is wasteful as it requires on average M samples to obtain one point.
The remainder of this paper is structured as follows. Section 2 introduces the definition of the Rényi
divergences and gives some basic properties of the importance weights. In Section 3, we give generalization bounds for importance weighting in the bounded case. We also present a general lower
bound indicating the key role played by the Rényi divergence of P and Q in this context. Section 4
deals with the more frequent case of unbounded w. Standard generalization bounds do not apply
here since the loss function is unbounded. We give novel generalization bounds for unbounded loss
functions under the assumption that the second moment is bounded (see Appendix) and use them to
derive learning guarantees for importance weighting in this more general setting. In Section 5, we
discuss an algorithm inspired by these guarantees for which we report preliminary experimental results. We also discuss why the commonly used remedy of truncating or capping importance weights
may not always provide the desired effect of improved performance. Finally, in Section 6, we study
the properties of an alternative reweighting also commonly used which is based on normalized importance weights, and discuss its relationship with the (unnormalized) weights w.
2 Preliminaries
Let $X$ denote the input space, $Y$ the label set, and let $L : Y \times Y \to [0, 1]$ be a loss function. We denote
by $P$ the target distribution and by $Q$ the source distribution according to which training points are
drawn. We also denote by $H$ the hypothesis set used by the learning algorithm and by $f : X \to Y$
the target labeling function.
2.1 Rényi divergences
Our analysis makes use of the notion of Rényi divergence, an information-theoretic measure of
the difference between two distributions directly relevant to the study of importance weighting. For
$\alpha \ge 0$, the Rényi divergence $D_\alpha(P\|Q)$ between distributions $P$ and $Q$ is defined by [23]
$$D_\alpha(P\|Q) = \frac{1}{\alpha-1}\log_2 \sum_x P(x)\left[\frac{P(x)}{Q(x)}\right]^{\alpha-1}. \qquad (1)$$
The Rényi divergence is a non-negative quantity and for any $\alpha > 0$, $D_\alpha(P\|Q) = 0$ iff $P = Q$. For
$\alpha = 1$, it coincides with the relative entropy. We denote by $d_\alpha(P\|Q)$ the exponential in base 2 of
the Rényi divergence $D_\alpha(P\|Q)$:
$$d_\alpha(P\|Q) = 2^{D_\alpha(P\|Q)} = \left[\sum_x \frac{P^\alpha(x)}{Q^{\alpha-1}(x)}\right]^{\frac{1}{\alpha-1}}. \qquad (2)$$
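For concreteness, Equation 2 can be transcribed directly for finite sample spaces; the following sketch is ours and is not part of the paper.

```python
import numpy as np

def d_alpha(P, Q, alpha):
    """Exponentiated Renyi divergence d_alpha(P||Q) of Equation 2, for alpha != 1.

    P, Q: 1-D arrays of probabilities over the same finite support, with Q > 0."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    return np.sum(P ** alpha / Q ** (alpha - 1)) ** (1.0 / (alpha - 1))

P = np.array([0.7, 0.2, 0.1])
Q = np.array([0.4, 0.4, 0.2])
print(d_alpha(P, Q, 2))  # second-order quantity d_2(P||Q) used throughout
print(d_alpha(P, P, 2))  # equals 1, since D_alpha(P||P) = 0
```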
2.2 Importance weights
The importance weight for distributions P and Q is defined by w(x) = P (x)/Q(x). In the following, the expectations are taken with respect to Q.
Lemma 1. The following identities hold for the expectation, second moment, and variance of $w$:
$$\mathop{\mathbb{E}}_Q[w] = 1 \qquad \mathop{\mathbb{E}}_Q[w^2] = d_2(P\|Q) \qquad \sigma^2(w) = d_2(P\|Q) - 1. \qquad (3)$$
Proof. The first equality is immediate. The second moment of $w$ can be expressed as follows in
terms of the Rényi divergence:
$$\mathop{\mathbb{E}}_Q[w^2] = \sum_{x\in X} w^2(x)\, Q(x) = \sum_{x\in X} \left[\frac{P(x)}{Q(x)}\right]^2 Q(x) = \sum_{x\in X} P(x)\, \frac{P(x)}{Q(x)} = d_2(P\|Q).$$
Thus, the variance of $w$ is given by $\sigma^2(w) = \mathbb{E}_Q[w^2] - \mathbb{E}_Q[w]^2 = d_2(P\|Q) - 1$.
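These identities are easy to check numerically. The following sketch (ours, with arbitrary toy distributions) estimates the three quantities by sampling from Q:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([0.7, 0.2, 0.1])
Q = np.array([0.4, 0.4, 0.2])
w = P / Q                                  # importance weights on the support
d2 = np.sum(P ** 2 / Q)                    # d_2(P||Q) from Equation 2

x = rng.choice(len(Q), size=200_000, p=Q)  # sample from the source Q
print(w[x].mean())         # ~ 1          (E_Q[w] = 1)
print((w[x] ** 2).mean())  # ~ d2         (E_Q[w^2] = d_2(P||Q))
print(w[x].var())          # ~ d2 - 1     (sigma^2(w) = d_2(P||Q) - 1)
```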
For any hypothesis $h \in H$, we denote by $R(h)$ its loss and by $\hat R_w(h)$ its weighted empirical loss:
$$R(h) = \mathop{\mathbb{E}}_{x\sim P}[L(h(x), f(x))] \qquad \hat R_w(h) = \frac{1}{m}\sum_{i=1}^m w(x_i)\, L(h(x_i), f(x_i)).$$
We shall use the abbreviated notation $L_h(x)$ for $L(h(x), f(x))$, in the absence of any ambiguity
about the target function $f$. Note that the unnormalized importance weighting of the loss is unbiased:
$$\mathop{\mathbb{E}}_Q[w(x)L_h(x)] = \sum_x \frac{P(x)}{Q(x)}\, L_h(x)\, Q(x) = \sum_x P(x)\, L_h(x) = R(h).$$
The following lemma gives a bound on the second moment.
Lemma 2. For all $\alpha > 0$ and $x \in X$, the second moment of the importance weighted loss can be
bounded as follows:
$$\mathop{\mathbb{E}}_{x\sim Q}[w^2(x)\, L_h^2(x)] \le d_{\alpha+1}(P\|Q)\, R(h)^{1-\frac{1}{\alpha}}. \qquad (4)$$
For $\alpha = 1$, this becomes $R(h)^2 \le \mathbb{E}_{x\sim Q}[w^2(x)L_h^2(x)] \le d_2(P\|Q)$.
Proof. The second moment can be bounded as follows:
$$\mathop{\mathbb{E}}_{x\sim Q}[w^2(x)L_h^2(x)] = \sum_x \left[\frac{P(x)}{Q(x)}\right]^2 Q(x)\, L_h^2(x) = \sum_x P(x)\,\frac{P(x)}{Q(x)}\, L_h^2(x)$$
$$\le \left[\sum_x P(x)\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right]^{\frac{1}{\alpha}} \left[\sum_x P(x)\, L_h^{\frac{2\alpha}{\alpha-1}}(x)\right]^{\frac{\alpha-1}{\alpha}} \qquad \text{(Hölder's inequality)}$$
$$= d_{\alpha+1}(P\|Q)\left[\sum_x P(x)\, L_h(x)\, L_h^{\frac{\alpha+1}{\alpha-1}}(x)\right]^{\frac{\alpha-1}{\alpha}}$$
$$\le d_{\alpha+1}(P\|Q)\, R(h)^{1-\frac{1}{\alpha}}\, B^{1+\frac{1}{\alpha}} = d_{\alpha+1}(P\|Q)\, R(h)^{1-\frac{1}{\alpha}},$$
where $B = 1$ bounds the loss.
3 Learning Guarantees - Bounded Case
Note that $\sup_x w(x) = \sup_x \frac{P(x)}{Q(x)} = d_\infty(P\|Q)$. We first examine the case $d_\infty(P\|Q) < +\infty$ and use
the notation $M = d_\infty(P\|Q)$. The following proposition then follows directly from Hoeffding's inequality.
Proposition 1 (single hypothesis). Fix $h \in H$. For any $\delta > 0$, with probability at least $1 - \delta$,
$$|R(h) - \hat R_w(h)| \le M\sqrt{\frac{\log\frac{2}{\delta}}{2m}}.$$
The upper bound $M$, though finite, can be quite large. The following theorem provides a more
favorable bound as a function of the ratio $M/m$ when any of the moments of $w$, $d_{\alpha+1}(P\|Q)$,
is finite, which is the case when $d_\infty(P\|Q) < \infty$ since the Rényi divergence is a non-decreasing
function of $\alpha$ [23, 2], in particular:
$$\forall \alpha > 0, \quad d_{\alpha+1}(P\|Q) \le d_\infty(P\|Q). \qquad (5)$$
Theorem 1 (single hypothesis). Fix $h \in H$. Then, for any $\alpha \ge 1$, for any $\delta > 0$, with probability at
least $1-\delta$, the following bound holds for the importance weighting method:
$$R(h) \le \hat R_w(h) + \frac{2M\log\frac{1}{\delta}}{3m} + \sqrt{\frac{2\big(d_{\alpha+1}(P\|Q)\, R(h)^{1-\frac{1}{\alpha}} - R(h)^2\big)\log\frac{1}{\delta}}{m}}. \qquad (6)$$
For $\alpha = 1$, after further simplification, this gives $R(h) \le \hat R_w(h) + \frac{2M\log\frac{1}{\delta}}{3m} + \sqrt{\frac{2 d_2(P\|Q)\log\frac{1}{\delta}}{m}}$.
Proof. Let $Z$ denote the random variable $w(x)L_h(x) - R(h)$. Then, $|Z| \le M$. By Lemma 2, the
variance of the random variable $Z$ can be bounded in terms of the Rényi divergence $d_{\alpha+1}(P\|Q)$:
$$\sigma^2(Z) = \mathop{\mathbb{E}}_Q[w^2(x)L_h(x)^2] - R(h)^2 \le d_{\alpha+1}(P\|Q)\, R(h)^{1-\frac{1}{\alpha}} - R(h)^2.$$
Thus, by Bernstein's inequality [4], it follows that:
$$\Pr[R(h) - \hat R_w(h) > \epsilon] \le \exp\left(\frac{-m\epsilon^2/2}{\sigma^2(Z) + \epsilon M/3}\right).$$
Setting $\delta$ to match this upper bound shows that with probability at least $1-\delta$, the following bound
holds for the importance weighting method:
$$R(h) \le \hat R_w(h) + \frac{M\log\frac{1}{\delta}}{3m} + \sqrt{\frac{M^2\log^2\frac{1}{\delta}}{9m^2} + \frac{2\sigma^2(Z)\log\frac{1}{\delta}}{m}}.$$
Using the sub-additivity of $\sqrt{\cdot}$ leads to the simpler expression
$$R(h) \le \hat R_w(h) + \frac{2M\log\frac{1}{\delta}}{3m} + \sqrt{\frac{2\sigma^2(Z)\log\frac{1}{\delta}}{m}}.$$
These results can be straightforwardly extended to general hypothesis sets. In particular, for a finite
hypothesis set and for $\alpha = 1$, the application of the union bound yields the following result.
Theorem 2 (finite hypothesis set). Let $H$ be a finite hypothesis set. Then, for any $\delta > 0$, with
probability at least $1-\delta$, the following bound holds for the importance weighting method:
$$R(h) \le \hat R_w(h) + \frac{2M\big(\log|H| + \log\frac{1}{\delta}\big)}{3m} + \sqrt{\frac{2 d_2(P\|Q)\big(\log|H| + \log\frac{1}{\delta}\big)}{m}}. \qquad (7)$$
For infinite hypothesis sets, a similar result can be shown straightforwardly using covering numbers
instead of $|H|$ or a related measure based on samples of size $m$ [20].
In the following proposition, we give a lower bound that further emphasizes the role of the Rényi
divergence of the second order in the convergence of importance weighting in the bounded case.
Proposition 2 (Lower bound). Assume that $M < \infty$ and $\sigma^2(w)/M^2 \ge 1/m$. Assume that $H$
contains a hypothesis $h_0$ such that $L_{h_0}(x) = 1$ for all $x$. Then, there exists an absolute constant $c$,
$c = 2/41^2$, such that
$$\Pr\left[\sup_{h\in H}\big|R(h) - \hat R_w(h)\big| \ge \sqrt{\frac{d_2(P\|Q)-1}{4m}}\right] \ge c > 0. \qquad (8)$$
Proof. Let $\sigma_H = \sup_{h\in H}\sigma(wL_h)$. If for all $x \in X$, $L_{h_0}(x) = 1$, then $\sigma^2(wL_{h_0}) = d_2(P\|Q) - 1 = \sigma^2(w) = \sigma_H^2$. The result then follows from a general theorem, Theorem 9, proven in the Appendix.
4 Learning Guarantees - Unbounded Case
The condition $d_\infty(P\|Q) < +\infty$ assumed in the previous section does not always hold, even in some
natural cases, as illustrated by the following examples.
4.1 Examples
Assume that $P$ and $Q$ both follow a Gaussian distribution with the standard deviations $\sigma_P$ and $\sigma_Q$
and with means $\mu$ and $\mu'$:
$$P(x) = \frac{1}{\sqrt{2\pi}\,\sigma_P}\exp\left(-\frac{(x-\mu)^2}{2\sigma_P^2}\right) \qquad Q(x) = \frac{1}{\sqrt{2\pi}\,\sigma_Q}\exp\left(-\frac{(x-\mu')^2}{2\sigma_Q^2}\right).$$
In that case, $\frac{P(x)}{Q(x)} = \frac{\sigma_Q}{\sigma_P}\exp\left(-\frac{\sigma_Q^2(x-\mu)^2 - \sigma_P^2(x-\mu')^2}{2\sigma_P^2\sigma_Q^2}\right)$; thus, even for $\sigma_P = \sigma_Q$ and $\mu \ne \mu'$ the
importance weights are unbounded, $d_\infty(P\|Q) = \sup_x \frac{P(x)}{Q(x)} = +\infty$, and the bound of Theorem 1
is not informative. The Rényi divergence of the second order is given by:
$$d_2(P\|Q) = \frac{\sigma_Q}{\sigma_P}\int_{-\infty}^{+\infty}\exp\left(-\frac{\sigma_Q^2(x-\mu)^2 - \sigma_P^2(x-\mu')^2}{2\sigma_P^2\sigma_Q^2}\right)P(x)\,dx = \frac{\sigma_Q}{\sigma_P^2\sqrt{2\pi}}\int_{-\infty}^{+\infty}\exp\left(-\frac{2\sigma_Q^2(x-\mu)^2 - \sigma_P^2(x-\mu')^2}{2\sigma_P^2\sigma_Q^2}\right)dx.$$
That is, for $\sigma_Q > \frac{\sqrt{2}}{2}\sigma_P$ the variance of the importance weights is bounded. By the additivity
property of the Rényi divergence, a similar situation holds for the products and sums of such Gaussian
distributions. Hence, in the rightmost example of Figure 1, the importance weights are unbounded,
but their second moment is bounded. In the next section we provide learning guarantees even for
this setting, in agreement with the results observed. For $\sigma_Q = 0.3\sigma_P$, the same favorable guarantees
do not hold and, as illustrated in Figure 1, learning is significantly more difficult.
This example of Gaussians can further illustrate what can go wrong in importance weighting. Assume that $\mu = \mu' = 0$, $\sigma_Q = 1$ and $\sigma_P = 10$. One could have expected this to be an easy case for
importance weighting since sampling from $Q$ provides useful information about $P$. The problem
is, however, that a sample from $Q$ will contain a very small number of points far from the mean
(of either negative or positive label) and that these points will be assigned very large weights. For
a sample of size $m$ and $\sigma_Q = 1$, the expected value of an extreme point is $\sqrt{2\log m} - o(1)$ and its
weight will be in the order of $m^{-1/\sigma_P^2 + 1/\sigma_Q^2} = m^{0.99}$. Therefore, a few extreme points will dominate all other weights and necessarily have a huge influence on the selection of a hypothesis by the
learning algorithm.
Another related example is when $\sigma_Q = \sigma_P = 1$ and $\mu' = 0$. Let $\mu \gg 0$ depend on the sample size
$m$. If $\mu$ is large enough compared to $\log(m)$, then, with high probability, all the weights will be
negligible. This is especially problematic, since the estimate of the probability of any event would
be negligible (in fact both an event and its complement). If we normalize the weights, the issue
is overcome, but then, with high probability, the maximum weight dominates the sum of all other
weights, reverting the situation back to that of the previous example.
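The first failure mode is easy to reproduce numerically; below is a small simulation (ours) with μ = μ′ = 0, σ_Q = 1 and σ_P = 10, showing that a handful of extreme points carries almost all of the total weight.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_p, sigma_q, m = 10.0, 1.0, 10_000

x = rng.normal(0.0, sigma_q, size=m)              # sample drawn from Q
log_w = (np.log(sigma_q / sigma_p)
         - x**2 / (2 * sigma_p**2) + x**2 / (2 * sigma_q**2))
w = np.exp(log_w)                                 # w(x) = P(x)/Q(x)

top = np.sort(w)[-5:]
print(top.sum() / w.sum())  # fraction of total weight held by the 5 largest points
```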
4.2 Importance weighting learning bounds - unbounded case
As in these examples, in practice, the importance weights are typically not bounded. However, we
shall show that, remarkably, under the weak assumption that the second moment of the weights
$w$, $d_2(P\|Q)$, is bounded, generalization bounds can be given for this case as well. The following result relies on a general learning bound for unbounded loss functions proven in the Appendix
(Corollary 1). We denote by Pdim(U ) the pseudo-dimension of a real-valued function class U [21].
Theorem 3. Let $H$ be a hypothesis set such that $\mathrm{Pdim}(\{L_h(x) : h \in H\}) = p < \infty$. Assume that
$d_2(P\|Q) < +\infty$ and $w(x) \ne 0$ for all $x$. Then, for any $\delta > 0$, with probability at least $1 - \delta$, the
following holds:
$$R(h) \le \hat R_w(h) + 2^{5/4}\sqrt{d_2(P\|Q)}\left(\frac{p\log\frac{2me}{p} + \log\frac{4}{\delta}}{m}\right)^{3/8}.$$
Proof. Since $d_2(P\|Q) < +\infty$, the second moment of $w(x)L_h(x)$ is finite and upper bounded by
$d_2(P\|Q)$ (Lemma 2). Thus, by Corollary 1, we can write
$$\Pr\left[\sup_{h\in H}\frac{R(h) - \hat R_w(h)}{\sqrt{d_2(P\|Q)}} > \epsilon\right] \le 4\exp\left(p\log\frac{2em}{p} - \frac{m\,\epsilon^{8/3}}{4^{5/3}}\right),$$
where $p$ is the pseudo-dimension of the function class $H'' = \{w(x)L_h(x) : h \in H\}$. We now show
that $p = \mathrm{Pdim}(\{L_h(x) : h \in H\})$. Let $H'$ denote $\{L_h(x) : h \in H\}$. Let $A = \{x_1, \ldots, x_k\}$ be a
set shattered by $H''$. Then, there exist real numbers $r_1, \ldots, r_k$ such that for any subset $B \subseteq A$ there
exists $h \in H$ such that
$$\forall x_i \in B,\ w(x_i)L_h(x_i) \ge r_i \qquad \forall x_i \in A - B,\ w(x_i)L_h(x_i) < r_i. \qquad (9)$$
Since by assumption $w(x_i) > 0$ for all $i \in [1, k]$, this implies that
$$\forall x_i \in B,\ L_h(x_i) \ge r_i/w(x_i) \qquad \forall x_i \in A - B,\ L_h(x_i) < r_i/w(x_i). \qquad (10)$$
Thus, $H'$ shatters $A$ with the witnesses $s_i = r_i/w(x_i)$, $i \in [1, k]$. Using the same observations, it is
straightforward to see that conversely, any set shattered by $H'$ is shattered by $H''$.
The convergence rate of the bound is slightly weaker ($O(m^{-3/8})$) than in the bounded case
($O(m^{-1/2})$). A faster convergence can be obtained however using the more precise bound of Theorem 8 at the expense of readability. The Rényi divergence $d_2(P\|Q)$ seems to play a critical role in
the bound and thus in the convergence of importance weighting in the unbounded case.
5 Alternative reweighting algorithms
The previous analysis can be generalized to the case of an arbitrary positive function $u : X \to \mathbb{R}$,
$u > 0$. Let $\hat R_u(h) = \frac{1}{m}\sum_{i=1}^m u(x_i)L_h(x_i)$ and let $\hat Q$ denote the empirical distribution.
Theorem 4. Let $H$ be a hypothesis set such that $\mathrm{Pdim}(\{L_h(x) : h \in H\}) = p < \infty$. Assume that
$0 < \mathbb{E}_Q[u^2(x)] < +\infty$ and $u(x) \ne 0$ for all $x$. Then, for any $\delta > 0$, with probability at least $1 - \delta$,
the following holds:
$$|R(h) - \hat R_u(h)| \le \Big|\mathop{\mathbb{E}}_Q\big[(w(x) - u(x))L_h(x)\big]\Big| + 2^{5/4}\max\Big(\sqrt{\mathop{\mathbb{E}}_Q[u^2(x)L_h^2(x)]}, \sqrt{\mathop{\mathbb{E}}_{\hat Q}[u^2(x)L_h^2(x)]}\Big)\left(\frac{p\log\frac{2me}{p} + \log\frac{4}{\delta}}{m}\right)^{3/8}.$$
[Figure 2 (four panels of Error vs. Training set size — Unweighted, Importance, Quantile, and Capped 1%, each at ratio $\sigma_Q/\sigma_P = 0.75$): Comparison of the convergence of 4 different algorithms for the learning task of Figure 1: learning with equal weights for all examples (Unweighted), Importance weighting, using Quantiles to parameterize the function u, and Capping the largest weights.]
Proof. Since $R(h) = \mathbb{E}_Q[w(x)L_h(x)]$, we can write
$$R(h) - \hat R_u(h) = \mathop{\mathbb{E}}_Q\big[(w(x) - u(x))L_h(x)\big] + \mathop{\mathbb{E}}_Q[u(x)L_h(x)] - \hat R_u(h),$$
and thus
$$|R(h) - \hat R_u(h)| \le \Big|\mathop{\mathbb{E}}_Q\big[(w(x) - u(x))L_h(x)\big]\Big| + \Big|\mathop{\mathbb{E}}_Q[u(x)L_h(x)] - \hat R_u(h)\Big|.$$
By Corollary 2 applied to the function $uL_h$, $|\mathbb{E}_Q[u(x)L_h(x)] - \hat R_u(h)|$ can be bounded by
$2^{5/4}\max\big(\sqrt{\mathbb{E}_Q[u^2(x)L_h^2(x)]}, \sqrt{\mathbb{E}_{\hat Q}[u^2(x)L_h^2(x)]}\big)\big(\frac{p\log\frac{2me}{p}+\log\frac{4}{\delta}}{m}\big)^{3/8}$
with probability $1-\delta$, with $p = \mathrm{Pdim}(\{L_h(x) : h \in H\})$ by a proof similar to that of Theorem 3.
The theorem suggests that other functions $u$ than $w$ can be used to reweight the cost of an error
on each training point by minimizing the upper bound, which is a trade-off between
the bias term $\big|\mathbb{E}_Q[(w(x)-u(x))L_h(x)]\big|$ and the second moment $\max\big(\sqrt{\mathbb{E}_Q[u^2(x)L_h^2(x)]}, \sqrt{\mathbb{E}_{\hat Q}[u^2(x)L_h^2(x)]}\big)$,
where the coefficients are explicitly given. Function $u$ can be selected from different families. Using
an upper bound on these quantities that is independent of $h$ and a multiplicative bound of the form
$$\max\Big(\sqrt{\mathop{\mathbb{E}}_Q[u^2]}, \sqrt{\mathop{\mathbb{E}}_{\hat Q}[u^2]}\Big) \le \sqrt{\mathop{\mathbb{E}}_Q[u^2]}\,\big(1 + O(1/\sqrt m)\big),$$
leads to the following optimization problem:
$$\min_{u\in U}\ \mathop{\mathbb{E}}_Q\big[|w(x) - u(x)|\big] + \lambda\sqrt{\mathop{\mathbb{E}}_Q[u^2]}, \qquad (11)$$
where $\lambda > 0$ is a parameter controlling the trade-off between bias and variance minimization and
where $U$ is a family of possible weight functions out of which $u$ is selected.
Here, we consider a family of functions $U$ parameterized by the quantiles $q$ of the weight function
$w$. A function $u_q \in U$ is then defined as follows: within each quantile, the value taken by $u_q$ is the
average of $w$ over that quantile. For small values of $\lambda$, the bias term dominates, and very fine-grained
quantiles minimize the bound of equation (11). For large values of $\lambda$ the variance term dominates
and the bound is minimized by using just one quantile, corresponding to an even weighting of
the training examples. Hence by varying $\lambda$ from small to large values, the algorithm interpolates
between standard importance weighting with just one example per quantile, and unweighted learning
where all examples are given the same weight. Figure 2 also shows the results of experiments for
the learning task of Figure 1 using the algorithm defined by (11) with this family of functions. The
optimal $q$ is determined by 10-fold cross-validation. We see that a more rapid convergence can be
obtained by using these weights compared to the standard importance weights $w$.
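A possible implementation of this quantile-based family, sketched here under our own naming conventions (it is not the authors' code), replaces each weight by the average of w over its quantile bin and picks the number of bins by an empirical version of the objective (11):

```python
import numpy as np

def quantile_weights(w, n_bins):
    """u_q: within each quantile bin of w, use the average of w over that bin."""
    edges = np.quantile(w, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, w, side="right") - 1, 0, n_bins - 1)
    means = np.array([w[bins == b].mean() if np.any(bins == b) else 0.0
                      for b in range(n_bins)])
    return means[bins]

def objective(w, u, lam):
    """Empirical counterpart of (11): bias term plus lam * sqrt(second moment)."""
    return np.abs(w - u).mean() + lam * np.sqrt((u ** 2).mean())

w = np.random.default_rng(0).lognormal(0.0, 1.5, size=1000)  # sampled weights
lam = 0.1
best_q = min(range(1, 51), key=lambda q: objective(w, quantile_weights(w, q), lam))
u = quantile_weights(w, best_q)   # reweighting to use in place of w
```

With one bin this reduces to an even weighting of the examples, and with one example per bin it recovers standard importance weighting, matching the interpolation described above.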
Another natural family of functions is that of thresholded versions of the importance weights
$\{u_\theta : \theta > 0,\ \forall x \in X,\ u_\theta(x) = \min(w(x), \theta)\}$. In fact, in practice, users often cap importance weights
by choosing an arbitrary value $\theta$. The advantage of this family is that, by definition, the weights are
bounded. However, in some cases, larger weights could be critical to achieve a better performance.
Figure 2 illustrates the performance of this approach. Compared to importance weighting, no change
in performance is observed until the largest 1% of the weights are capped, in which case we only
observe a performance degradation. We expect the thresholding to be less beneficial when the large
weights reflect the true w and are not an artifact of estimation uncertainties.
6 Relationship between normalized and unnormalized weights
An alternative approach based on the weight function $w(x) = P(x)/Q(x)$ consists of normalizing the
weights. Thus, while in the unnormalized case the unweighted empirical error is replaced by
$$\frac{1}{m}\sum_{i=1}^m w(x_i)\, L_h(x_i) = \sum_{i=1}^m \frac{w(x_i)}{m}\, L_h(x_i),$$
in the normalized case it is replaced by
$$\sum_{i=1}^m \frac{w(x_i)}{W}\, L_h(x_i),$$
with $W = \sum_{i=1}^m w(x_i)$. We refer to $\tilde w(x) = w(x)/W$ as the normalized importance weight. An
advantage of the normalized weights is that they are by definition bounded by one. However, the
price to pay for this benefit is the fact that the weights are no longer unbiased. In fact, several issues
similar to those we pointed out in Section 4 affect the normalized weights as well.
Here, we maintain the assumption that the second moment of the importance weights is bounded
and analyze the relationship between normalized and unnormalized weights. We show that, under
this assumption, normalized and unnormalized weights are in fact very close, with high probability.
Observe that for any $i \in [1, m]$,
$$\tilde w(x_i) - \frac{w(x_i)}{m} = w(x_i)\Big(\frac{1}{W} - \frac{1}{m}\Big) = \frac{w(x_i)}{W}\Big(1 - \frac{W}{m}\Big).$$
Since $\frac{w(x_i)}{W} \le 1$, we can write $\big|\tilde w(x_i) - \frac{w(x_i)}{m}\big| \le \big|1 - \frac{W}{m}\big|$. Since $\mathbb{E}[w(x)] = 1$, we also have
$\mathbb{E}_S\big[\frac{W}{m}\big] = \frac{1}{m}\sum_{k=1}^m \mathbb{E}[w(x_k)] = 1$. Thus, by Corollary 2, for any $\delta > 0$, with probability at least $1 - \delta$,
the following inequality holds:
$$\Big|1 - \frac{W}{m}\Big| \le 2^{5/4}\max\Big(\sqrt{d_2(P\|Q)}, \sqrt{\hat d_2(P\|Q)}\Big)\left(\frac{\log(2me) + \log\frac{4}{\delta}}{m}\right)^{3/8},$$
which implies the same upper bound on $\big|\tilde w(x_i) - \frac{w(x_i)}{m}\big|$, simultaneously for all $i \in [1, m]$.
7 Conclusion
case and in the more general unbounded case under the assumption that the second moment of the
weights is bounded. We also initiated a preliminary exploration of alternative weights and showed its
benefits. A more systematic study of new algorithms based on these learning guarantees could lead
to even more beneficial and practically useful results. Several of the learning guarantees we gave
depend on the Rényi divergence of the distributions P and Q. Accurately estimating that quantity
is thus critical and should motivate further studies of the convergence of its estimates from finite
samples. Finally, our novel unbounded loss learning bounds are of independent interest and could
be useful in a variety of other contexts.
References
[1] M. Anthony and J. Shawe-Taylor. A result of Vapnik with applications. Discrete Applied Mathematics, 47:207–217, 1993.
[2] C. Arndt. Information Measures: Information and its Description in Science and Engineering. Signals and Communication Technology. Springer Verlag, 2004.
[3] S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira. Analysis of representations for domain adaptation. NIPS, 2007.
[4] S. N. Bernstein. Sur l'extension du théorème limite du calcul des probabilités aux sommes de quantités dépendantes. Mathematische Annalen, 97:1–59, 1927.
[5] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In ICML, pages 49–56, New York, NY, USA, 2009.
[6] S. Bickel, M. Brückner, and T. Scheffer. Discriminative learning for differing training and test distributions. In ICML, pages 81–88, 2007.
[7] J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. Learning bounds for domain adaptation. NIPS 2007, 2008.
[8] C. Cortes, M. Mohri, M. Riley, and A. Rostamizadeh. Sample selection bias correction theory. In ALT, 2008.
[9] S. Dasgupta and P. M. Long. Boosting with diverse base classifiers. In COLT, 2003.
[10] H. Daumé III and D. Marcu. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101–126, 2006.
[11] M. Dudík, R. E. Schapire, and S. J. Phillips. Correcting sample selection bias in maximum entropy density estimation. In NIPS, 2006.
[12] R. M. Dudley. A course on empirical processes. Lecture Notes in Math., 1097:2–142, 1984.
[13] R. M. Dudley. Universal Donsker classes and metric entropy. Annals of Probability, 14(4):1306–1326, 1987.
[14] C. Elkan. The foundations of cost-sensitive learning. In IJCAI, pages 973–978, 2001.
[15] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Inf. Comput., 100(1):78–150, 1992.
[16] J. Huang, A. J. Smola, A. Gretton, K. M. Borgwardt, and B. Schölkopf. Correcting sample selection bias by unlabeled data. In NIPS, volume 19, pages 601–608, 2006.
[17] J. Jiang and C. Zhai. Instance Weighting for Domain Adaptation in NLP. In ACL, 2007.
[18] J. S. Liu. Monte Carlo strategies in scientific computing. Springer, 2001.
[19] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In COLT, 2009.
[20] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In COLT, Montréal, Canada, June 2009. Omnipress.
[21] D. Pollard. Convergence of Stochastic Processes. Springer, New York, 1984.
[22] D. Pollard. Asymptotics via empirical processes. Statistical Science, 4(4):341–366, 1989.
[23] A. Rényi. On measures of information and entropy. In Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability, pages 547–561, 1960.
[24] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
[25] M. Sugiyama, S. Nakajima, H. Kashima, P. von Bünau, and M. Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. In NIPS, 2008.
[26] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
[27] V. N. Vapnik. Estimation of Dependences Based on Empirical Data, 2nd ed. Springer, 2006.
[28] J. von Neumann. Various techniques used in connection with random digits. Monte Carlo methods. Nat. Bureau Standards, 12:36–38, 1951.
[29] B. Zadrozny, J. Langford, and N. Abe. Cost-sensitive learning by cost-proportionate example weighting. In ICDM, 2003.
3,486 | 4,157 | Fast detection of multiple change-points shared by
many signals using group LARS
Jean-Philippe Vert and Kevin Bleakley
Mines ParisTech CBIO, Institut Curie, INSERM U900
{firstname.lastname}@mines-paristech.fr
Abstract
We present a fast algorithm for the detection of multiple change-points when each
is frequently shared by members of a set of co-occurring one-dimensional signals.
We give conditions on consistency of the method when the number of signals
increases, and provide empirical evidence to support the consistency results.
1 Introduction
Finding the place (or time) where most or all of a set of one-dimensional signals (or profiles) jointly
change in some specific way is an important question in several fields. A first common situation is
when we want to find change-points in a multidimensional signal, for instance, we may want to automatically detect changes from human speech to other sound in a movie, based on data representation
of features coming from both the audio and visual tracks [1]. Another important situation is when we
are confronted with several 1-dimensional signals which we believe share common change-points,
e.g., genomic profiles of several patients. The latter application is increasingly important in biology
and medicine, in particular for the detection of copy-number variation along the genome, though it
is also useful for microarray and genetic linkage studies [2]. The common thread in all of these is
the search for data patterns shared by a set of patients at precise places on the genome; in particular,
sudden changes in measurement. As opposed to the segmentation of multi-dimensional signals such
as speech, the length of the signal (i.e., the number of probes along the genome) is fixed for a given
technology while the number of signals (i.e., the number of patients) can increase. It is therefore of
interest to develop method to identify multiple change-points shared by several signals which can
benefit from increasing the number of profiles.
There exists a vast literature on the change-point detection problem [3, 4]. Here we focus on the
problem of approximating a multidimensional signal by a piecewise-constant one, using quadratic
error criteria. It is well-known that the optimal segmentation of a p-dimensional signal of length
n into k segments can be obtained in $O(n^2 pk)$ by dynamic programming [5, 6, 7]. The quadratic complexity in $n^2$ is however prohibitive in applications such as genomics, where $n$ can be in the order of $10^5$ to $10^7$ with current technology. An alternative to such global procedures, which estimate
change-points as solutions of a global optimization problem, are fast local procedures such as binary
segmentation [8], which detect breakpoints by iteratively applying a method for single change-point
detection to the segments obtained after the previous change-point is detected. While such recursive
methods can be extremely fast, in the order of O(np log(k)) when the single change-point detector
is O(np), quality of segmentation is questionable when compared with global procedures [9].
For p = 1 (a single signal), an interesting alternative to these global and local procedures is to
express the optimal segmentation as the solution of a convex optimization problem, using the (convex) total variation instead of the (non-convex) number of jumps to penalize a piecewise-constant
function, in order to approximate the original signal [10, 11]. The resulting piecewise-constant approximation of the signal, defined as the global minimum of the objective function, benefits from
1
theoretical guarantees in terms of correctly detecting change-points [12, 13], and can be implemented
efficiently in O(nk) or O(n log(n)) [14, 12, 15].
In this paper we propose an extension of total-variation based methods for single signals to the
multidimensional setting, in order to approximate a multidimensional signal by a piecewise constant signal with multiple change-points. We define the approximation as the solution of a convex
optimization problem, which involves a quadratic approximation error penalized by the $\ell_1$ norm of
increments of the function. The problem can be reformulated as a group LASSO problem, which we
propose to solve approximately with a group LARS procedure [16]. Using the particular structure
of the design matrix, we can find the first k change-points in O(npk), extending the method of [12]
to the multidimensional setting.
Unlike most previous theoretical investigations of change-point methods, we are not interested in
the case where the dimension p is fixed and the length of the profiles n increases, but in the opposite
situation where n is fixed and p increases. Indeed, this corresponds to the case in genomics where,
for example, n would be the fixed number of probes used to measure a signal along the genome,
and p the number of samples or patients analyzed. We want to design a method that benefits from
increasing p in order to identify shared change-points, even though the signal-to-noise ratio may be
very low within each signal. As a first step towards this question, we give conditions under which
our method is able to consistently identify a single change-point as p increases. We also show by
simulation that our method is able to consistently identify multiple change-points, as $p \to +\infty$,
validating its relevance in practical settings. To conclude, we present possible applications of the
method in the study of copy number variations in cancer.
2 Notation
For any two integers $u \le v$, let $[u, v]$ denote the interval $\{u, u+1, \ldots, v\}$. For any $u \times v$ matrix $M$ we note $M_{i,j}$ its $(i,j)$-th entry. $\|M\| = \sqrt{\sum_{i=1}^{u}\sum_{j=1}^{v} M_{i,j}^2}$ is its Frobenius norm (or Euclidean norm in the case of vectors). For any subsets of indices $A = (a_1, \ldots, a_{|A|}) \in [1,u]^{|A|}$ and $B = (b_1, \ldots, b_{|B|}) \in [1,v]^{|B|}$, we denote by $M_{A,B}$ the $|A| \times |B|$ matrix with entries $M_{a_i,b_j}$ for $(i,j) \in [1,|A|] \times [1,|B|]$. For simplicity we will use $\bullet$ instead of $[1,u]$ or $[1,v]$, i.e., $A_{i,\bullet}$ is the $i$-th row of $A$ and $A_{\bullet,j}$ is the $j$-th column of $A$. We note $\mathbf{1}_{u,v}$ the $u \times v$ matrix of ones, and $I_p$ the $p \times p$ identity matrix.
3 Formulation
We consider $p$ profiles of length $n$, stored in an $n \times p$ matrix $Y$. The $i$-th profile $Y_{\bullet,i} = (Y_{1,i}, \ldots, Y_{n,i})$ is the $i$-th column of $Y$. We assume that each profile is a piecewise-constant signal
corrupted by noise, and that change-points locations tend to be shared across profiles. Our goal is
to detect these shared change-points, and benefit from the possibly large number p of profiles to
increase the statistical power of change-point detection.
When p = 1 (single profile), a popular method to find change-points in a signal is to approximate it
by a piecewise constant function using total variation (TV) denoising [10], i.e., to solve
$$\min_{U \in \mathbb{R}^n} \; \|Y - U\|^2 + \lambda \sum_{i=1}^{n-1} |U_{i+1} - U_i|\,. \qquad (1)$$
For a given $\lambda > 0$, the solution $U \in \mathbb{R}^n$ of (1) is piecewise-constant and its change-points are predicted to be those of $Y$. Adding penalties proportional to the $\ell_1$ or $\ell_2$ norm of $U$ to (1) does
not change the position of the change-points detected [11, 17], and the capacity of TV denoising to
correctly identify change-points when n increases has been investigated in [12, 13].
Here we propose to generalize TV denoising to multiple profiles by considering the following convex
optimization problem, for $Y \in \mathbb{R}^{n \times p}$:
$$\min_{U \in \mathbb{R}^{n \times p}} \; \|Y - U\|^2 + \lambda \sum_{i=1}^{n-1} \|U_{i+1,\bullet} - U_{i,\bullet}\|\,. \qquad (2)$$
The second term in (2) penalizes the sum of Euclidean norms of the increments of $U$, seen as a time-dependent multidimensional vector. Intuitively, this penalty will enforce many increments $U_{i+1,\bullet} - U_{i,\bullet}$ to collapse to 0, just like the total variation in (1). As a result the solution of (2) provides an approximation of the profiles $Y$ by an $n \times p$ matrix of piecewise-constant profiles $U$
which share change-points. In the following, we propose a fast algorithm to approximately solve (2)
(Section 4), discuss theoretically whether the solution identifies correctly the change-points (Section
5), and provide an empirical evaluation of the method (Section 6).
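On small problems, the convex problem (2) can be handed directly to an off-the-shelf solver before turning to the fast algorithm of Section 4. The following Python sketch assumes the cvxpy package; the function name is ours and this is only a baseline, not the method of this paper:

```python
import cvxpy as cp

def group_tv_approximation(Y, lam):
    """Solve problem (2) directly: quadratic fit to the n x p matrix Y,
    penalized by the sum of Euclidean norms of the row increments of U."""
    n, p = Y.shape
    U = cp.Variable((n, p))
    jumps = U[1:, :] - U[:-1, :]                     # (n-1) x p increments
    objective = cp.sum_squares(Y - U) + lam * cp.sum(cp.norm(jumps, 2, axis=1))
    cp.Problem(cp.Minimize(objective)).solve()
    return U.value          # change-points are the rows where U changes

```

This approach scales poorly with $n$, which is precisely what motivates the fast implementation described next.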
4 Implementation
Although (2) is a convex optimization problem that can in principle be solved by general-purpose
solvers [18], we are often working in dimensions that can reach millions, making this approach
impractical. Moreover, we would ideally like to obtain solutions for various values of $\lambda$, corresponding to various numbers of change-points, in order to be able to select the optimal number of
change-points using various statistical criteria. In the single profile case (p = 1), [14] proposed a
fast coordinate descent-like method, [12] showed how to find the first k change-points iteratively in
O(nk), and [15] proposed an O(n ln(n)) method to find all change-points. However, none of these
methods is applicable directly to the p > 1 setting since they all rely on specific properties of the
$p = 1$ case, such as the fact that the solution is piecewise-affine in $\lambda$ and that the set of change-points is monotonically decreasing with $\lambda$.
In order to propose a fast method to solve (2) in the p > 1 setting, let us first reformulate it as
a group LASSO regression problem [16]. To this end, we make the change of variables $(\gamma, \beta) \in \mathbb{R}^{(n-1)\times p} \times \mathbb{R}^{1\times p}$ given by:
$$\beta = U_{1,\bullet}\,, \qquad \gamma_{i,\bullet} = U_{i+1,\bullet} - U_{i,\bullet} \quad \text{for } i = 1, \ldots, n-1\,.$$
In other words $\gamma_{i,j}$ is the jump between the $i$-th and the $(i+1)$-th positions of the $j$-th profile. We immediately get an expression of $U$ as a function of $\gamma$ and $\beta$:
$$U_{1,\bullet} = \beta\,, \qquad U_{i,\bullet} = \beta + \sum_{j=1}^{i-1} \gamma_{j,\bullet} \quad \text{for } i = 2, \ldots, n\,.$$
This can be rewritten in matrix form as
$$U = \mathbf{1}_{n,1}\,\beta + X\gamma\,,$$
where $X$ is the $n \times (n-1)$ matrix with entries $X_{i,j} = 1$ for $i > j$. Making this change of variable, we can re-express (2) as follows:
$$\min_{\gamma \in \mathbb{R}^{(n-1)\times p},\,\beta \in \mathbb{R}^{1\times p}} \; \|Y - X\gamma - \mathbf{1}_{n,1}\beta\|^2 + \lambda \sum_{i=1}^{n-1} \|\gamma_{i,\bullet}\|\,. \qquad (3)$$
For any $\gamma \in \mathbb{R}^{(n-1)\times p}$, the minimum in $\beta$ is reached for $\beta = \mathbf{1}_{1,n}(Y - X\gamma)/n$. Plugging this into (3), we get that the matrix of jumps $\gamma$ is solution of
$$\min_{\gamma \in \mathbb{R}^{(n-1)\times p}} \; \|\bar{Y} - \bar{X}\gamma\|^2 + \lambda \sum_{i=1}^{n-1} \|\gamma_{i,\bullet}\|\,, \qquad (4)$$
where $\bar{Y}$ and $\bar{X}$ are obtained from $Y$ and $X$ by centering each column.
Equation 4 is a group LASSO problem, with a particular design matrix and particular groups of
features. Since existing methods to exactly solve group LASSO regression problems remain difficult
to apply here (in particular, we do not want to store in memory the $n \times (n-1)$ design matrix when $n$ is in the millions), we propose to approximate instead the solution of (4) with the group LARS
strategy, which was proposed by [16] as a good approximation to the regularization path of the group
LASSO. More precisely, the group LARS approximates the solution path of (4) with a piecewise-affine set of solutions, and iteratively finds change-points. While the original group LARS method requires storage and manipulation of the design matrix [16], which we cannot afford here, we can
extend technical results of [12] to show that the particular structure of the design matrix $\bar{X}$ allows efficient computation of matrix inverses and products.
Lemma 1. For any $R \in \mathbb{R}^{n \times p}$, we can compute $C = \bar{X}^\top R$ in $O(np)$ time and memory.
Lemma 2. For any $A = (a_1, \ldots, a_{|A|})$, set of distinct indices with $1 \le a_1 < \ldots < a_{|A|} \le n$, the matrix $\bar{X}_{\bullet,A}^\top \bar{X}_{\bullet,A}$ is invertible, and for any $|A| \times p$ matrix $R$, the matrix
$$C = \left(\bar{X}_{\bullet,A}^\top \bar{X}_{\bullet,A}\right)^{-1} R$$
can be computed in $O(|A|p)$ time and memory.
Proof of these results can be found in Supplementary Materials.
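Lemma 1 exploits the staircase structure of $X$: row $j$ of $X^\top R$ is simply the sum of the rows of $R$ below position $j$, so all rows can be obtained with one reversed cumulative sum, and column centering adds only a rank-one correction. A minimal NumPy sketch (function name ours, 0-indexed):

```python
import numpy as np

def xbar_transpose_dot(R):
    """Compute C = Xbar^T R in O(np), where X is the n x (n-1) matrix with
    X[i, j] = 1 for i > j and Xbar is X with each column centered."""
    n, _ = R.shape
    tails = np.cumsum(R[::-1], axis=0)[::-1]      # tails[i] = sum of R[i:]
    C = tails[1:]                                 # row j = sum of R[j+1:]
    col_means = (n - 1 - np.arange(n - 1)) / n    # mean of column j of X
    return C - np.outer(col_means, R.sum(axis=0))  # rank-one centering term
```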
Algorithm 1 describes the fast group LARS method to approximately solve (4). At each subsequent iteration to find the next change-point, we follow steps 3 to 8, which have maximum complexity $O(np)$, resulting in $O(npk)$ complexity in time and $O(np)$ in memory to find the first $k$ change-points with the fast group LARS algorithm.
Algorithm 1 Fast group LARS algorithm
Require: centered data $\bar{Y}$, number of breakpoints $k$.
1: Initialize $r = \bar{Y}$, $A = \emptyset$.
2: for $i = 1$ to $k$ do
3:   Compute $\hat{c} = \bar{X}^\top r$ using Lemma 1.
4:   If $i = 1$, find the first breakpoint: $\hat{a} = \arg\max_{j \in [1,n]} \|\hat{c}_{j,\bullet}\|$, $A = \{\hat{a}\}$.
5:   Descent direction: compute $w = \left(\bar{X}_{\bullet,A}^\top \bar{X}_{\bullet,A}\right)^{-1} \hat{c}_{A,\bullet}$ using Lemma 2, then $u_A = \bar{X}_{\bullet,A}\, w$ with cumulative sums, then $a = \bar{X}^\top u_A$ using Lemma 1.
6:   Descent step: for each $u \in [1,n] \setminus A$, find, if it exists, the smallest positive solution $\alpha_u$ of the second-order polynomial in $\alpha$:
     $$\|\hat{c}_{u,\bullet} - \alpha a_{u,\bullet}\|^2 = \|\hat{c}_{v,\bullet} - \alpha a_{v,\bullet}\|^2\,,$$
     where $v$ is any element of $A$.
7:   Find the next breakpoint: $\hat{u} = \arg\min_{u \in [1,n] \setminus A} \alpha_u$.
8:   Update $A = A \cup \{\hat{u}\}$ and $r = r - \alpha_{\hat{u}}\, u_A$.
9: end for
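For illustration, here is a compact and deliberately unoptimized rendering of Algorithm 1 in Python. To stay short it materializes $\bar{X}$ explicitly rather than using the $O(np)$ and $O(|A|p)$ routines of Lemmas 1 and 2, so it is a functional sketch of the iteration, not the fast implementation:

```python
import numpy as np

def group_lars_changepoints(Y, k):
    """Sketch of Algorithm 1: return the (0-indexed) jump positions of the
    first k change-points shared by the columns of Y."""
    n, p = Y.shape
    Yc = Y - Y.mean(axis=0)
    X = np.tril(np.ones((n, n - 1)), k=-1)           # X[i, j] = 1 for i > j
    Xc = X - X.mean(axis=0)
    r = Yc.copy()
    c = Xc.T @ r
    A = [int(np.argmax(np.linalg.norm(c, axis=1)))]  # first breakpoint (step 4)
    while len(A) < k:
        c = Xc.T @ r                                 # step 3 (Lemma 1 in practice)
        XA = Xc[:, A]
        w = np.linalg.solve(XA.T @ XA, c[A, :])      # step 5 (Lemma 2 in practice)
        uA = XA @ w
        a = Xc.T @ uA
        v = A[0]                                     # step 6: per-candidate roots
        alphas = np.full(n - 1, np.inf)
        for u in set(range(n - 1)) - set(A):
            quad = [a[u] @ a[u] - a[v] @ a[v],
                    -2.0 * (c[u] @ a[u] - c[v] @ a[v]),
                    c[u] @ c[u] - c[v] @ c[v]]
            roots = np.roots(quad)
            pos = roots.real[np.isreal(roots) & (roots.real > 1e-12)]
            if pos.size:
                alphas[u] = pos.min()
        u_hat = int(np.argmin(alphas))               # step 7
        A.append(u_hat)                              # step 8
        r = r - alphas[u_hat] * uA
    return sorted(A)
```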
5 Theoretical analysis
In this section, we study theoretically to what extent the estimator (2) recovers correct change-points.
The vast majority of existing theoretical results for offline segmentation and change-point detection
consider the setting where p is fixed (usually p = 1), and n increases. This typically corresponds to
a setting where we can sample a continuous signal with increasing density, and wish to locate more
precisely the underlying change-points as the density increases.
Here we propose a radically different analysis, motivated by applications in genomics. Here, the
length of profiles n is fixed for a given technology, but the number of profiles p can increase when
more biological samples or patients are analyzed. The property we would like to study is then, for
a given change-point detection method, whether increasing p for fixed n allows us to locate more
precisely the change-points. While this simply translates our intuition that increasing the number of
profiles should increase the statistical power of change-point detection, and while this property was
empirically observed in [2], we are not aware of previous theoretical results in this setting.
5.1 Consistent estimation of a single change-point
As a first step towards the analysis of this "fixed n increasing p" setting, let us assume that the
observed centered profiles $\bar{Y}$ are obtained by adding noise to a set of profiles with a single shared change-point between positions $u$ and $u+1$, for some $u \in [1, n-1]$. In other words, we assume that
$$\bar{Y} = \bar{X}\gamma^* + W\,,$$
where $\gamma^*$ is an $(n-1) \times p$ matrix of zeros except for its $u$-th row $\gamma^*_{u,\bullet}$, and $W$ is a noise matrix whose entries are assumed to be independent and identically distributed with respect to a centered Gaussian distribution with variance $\sigma^2$. In this section we study the probability that the first breakpoint found by our procedure is the correct one, when $p$ increases. We therefore consider an infinite sequence of jumps $(\gamma^*_{u,i})_{i \ge 1}$, and letting $\bar{\gamma}_k^2 = \frac{1}{k}\sum_{i=1}^{k}(\gamma^*_{u,i})^2$, we assume that $\bar{\gamma}^2 = \lim_{k \to \infty} \bar{\gamma}_k^2$ exists and is finite. We first show that, as $p$ increases, the first selected change-point is always given by the same formula.
Lemma 3. Assume, without loss of generality, that $u \ge n/2$. When $p \to +\infty$, the first change-point selected is
$$\hat{u} = \arg\max_{i \in [1,u]} \; \bar{\gamma}^2\,\frac{i^2 (n-u)^2}{n^2} + \sigma^2\,\frac{i(n-i)}{n} \qquad (5)$$
with probability tending to 1.
From this we easily deduce under which condition the correct change-point is selected, i.e., when $\hat{u} = u$:
Theorem 4. Let $\alpha = u/n$ and
$$\tilde{\sigma}_\alpha^2 = n\bar{\gamma}^2\,\frac{(1-\alpha)^2\left(\alpha - \frac{1}{2n}\right)}{\alpha - \frac{1}{2} - \frac{1}{2n}}\,. \qquad (6)$$
When $\sigma^2 < \tilde{\sigma}_\alpha^2$, the probability that the first selected change-point is the correct one tends to 1 as $p \to +\infty$. When $\sigma^2 > \tilde{\sigma}_\alpha^2$, it is not the correct one with probability tending to 1.
This theorem, whose proof along with that of Lemma 3 can be found in Supplementary Materials,
deserves several comments.
• To detect a change-point at position $u = \alpha n$, the noise level $\sigma^2$ must not be larger than the critical value $\tilde{\sigma}_\alpha^2$ given by (6), hence the method is not consistent for all positions. $\tilde{\sigma}_\alpha$ decreases monotonically as $\alpha$ goes from $1/2$ to 1, meaning that change-points near the boundary are more difficult to detect correctly than change-points near the center. The most difficult change-point is the last one ($u = n-1$), which can only be detected consistently if $\sigma^2$ is smaller than
$$\tilde{\sigma}^2_{1-1/n} = \frac{2\bar{\gamma}^2}{n} + o(n^{-1})\,.$$
• For a given level of noise $\sigma^2$, change-point detection is asymptotically correct for any $\alpha \in [\epsilon, 1-\epsilon]$, where $\epsilon$ satisfies $\sigma^2 = \tilde{\sigma}^2_{1-\epsilon}$, i.e.,
$$\epsilon = \sqrt{\frac{\sigma^2}{2n\bar{\gamma}^2}} + o(n^{-1/2})\,.$$
This shows in particular that increasing the profile length $n$ increases the interval where change-points are correctly identified, and that we can get as close as possible to the boundary for $n$ large enough.
• When $\sigma^2 < \tilde{\sigma}_\alpha^2$, the correct change-point is found consistently when $p$ increases, showing the benefit of the accumulation of many profiles.
• It is possible to make the detection of the first change-point consistent uniformly over the full signal, by simply subtracting the term $p\sigma^2 i(n-i)/n$ from $\|\hat{c}_{i,\bullet}\|^2$, which is maximized over $i$ to select the first change-point. Then, a simple modification of Lemma 3 shows that, as $p \to +\infty$, any given change-point is a.s. found. However, this modification, easy to do for the first change-point, is not obvious to extend to successive change-points detected by group LARS. We consider it an interesting future challenge to develop variants of the group LARS iterative segmentation method whose performance does not depend on the position of the change points.
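The critical value of Theorem 4 is easy to evaluate numerically; for instance, the setting used in the experiments of Section 6 ($n = 100$, $\alpha = 0.8$, unit jumps) gives the value 10.78 quoted there. A two-line check (function name ours):

```python
def critical_variance(alpha, n, gamma_bar_sq=1.0):
    """Critical noise level of Eq. (6)."""
    return n * gamma_bar_sq * (1 - alpha) ** 2 * (alpha - 1 / (2 * n)) \
        / (alpha - 0.5 - 1 / (2 * n))

print(round(critical_variance(0.8, 100), 2))   # 10.78
```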
5.2 Consistent estimation of a single change-point with fluctuating position
An interesting variant of the problem of detecting a change-point common to many profiles is that of
detecting a change-point with similar location in many profiles, allowing fluctuations in the precise
location of the change-point. This can be modeled by assuming that the profiles are random, and that
the $i$-th profile has a change-point of value $\gamma_i$ at position $U_i$, where $(\gamma_i, U_i)_{i=1,\ldots,p}$ are independent and identically distributed according to a distribution $P = P_\gamma \otimes P_U$ (i.e., we assume $\gamma_i$ independent from $U_i$). We denote $\bar{\gamma}^2 = E_{P_\gamma}\gamma^2$ and $p_i = P_U(U = i)$ for $i \in [1, n-1]$. Assuming that the support of $P_U$ is $[a, b]$ with $1 \le a \le b \le n-1$, the following result extends Theorem 4 by showing that, under a condition on the noise level, the first change-point discovered is indeed in the support of $P_U$:
Theorem 5. Let $\alpha = U/n$ be the random position of the change-point on $[0, 1]$ and $\alpha_m = a/n$ and $\alpha_M = b/n$ the positions of the left and right boundaries of the support of $P_U$ scaled to $[0, 1]$. Let also
$$\tilde{\sigma}^2_{P_U} = n\bar{\gamma}^2\,\frac{\left((1 - E\alpha)^2 + \mathrm{var}(\alpha)\right)\left(\alpha_m - \frac{1}{2n}\right)}{\alpha_m - \frac{1}{2} - \frac{1}{2n}}\,. \qquad (7)$$
If $1/2 \in (\alpha_m, \alpha_M)$, then for any $\sigma^2$ the probability that the first selected change-point is in the support of $P$ tends to 1 as $p \to +\infty$. If $1/2 < \alpha_m$, then the probability that the first selected change-point is in the support of $P$ tends to 1 when $\sigma^2 < \tilde{\sigma}^2_{P_U}$. When $\sigma^2 > \tilde{\sigma}^2_{P_U}$, it is not the correct one with probability tending to 1.
This theorem, whose proof is postponed to Supplementary Materials, illustrates the robustness of
the method to handle fluctuations in the precise position of the change-point shared between the
profiles. Although this situation rarely occurs when we are considering classical multidimensional
signals such as financial time series or video signals, it is likely to be the rule when we consider
profiles coming from different biological samples. Although the theorem only gives a condition
on the noise level to ensure that the selected change-point lies in the support of the distribution of
change-point locations, a precise estimate of the location of the selected change-point as a function
of $P_U$, which generalizes Lemma 3, is given in the proof.
5.3 The case of multiple change-points
While the theoretical results presented above focus on the detection of a single change-point, the
real interest of the method is to estimate multiple change-points. The extension of Theorem 4 to
this setting is beyond the scope of this paper, and is postponed for future efforts. We nevertheless
conjecture here that we can consistently estimate multiple change-points under conditions on the
level of noise (not too large), the distance between them (not too small), and the correlations between
their jumps (not too large). Indeed, following the ideas in the proof of Theorem 4, we must analyze
the path of the vectors $(\hat{c}_{i,\bullet})$, and check that, for some $\lambda$ in (2), they reach their maximum norm precisely at the true change-points. The situation is more complicated than in the single change-point case since the vectors $(\hat{c}_{i,\bullet})$ must hit a hypersphere at each correct change-point, and must
remain strictly within the hypersphere between consecutive change-points. This can be ensured if the
noise level is not too high (like in the single change-point case), and if the positions corresponding
to successive change-points on the hypersphere are far enough from each other. In practice this
translates to conditions that two successive change-points should not be too close to each other, and
that profiles should have, if possible, independent jumps (direction, etc.). We provide experimental
results below that confirm that, when the noise is not too large, we can indeed correctly identify
several change-points, with a probability of success increasing to 1 as p increases.
6 Experiments
In this section we give experimental evidence both for theoretical O(npk) complexity and Theorem
4. Figure 1 shows linearity in each of p, n and k respectively whilst fixing the other two variables,
confirming the O(npk) complexity.
To test Theorem 4, we considered signals of length 100, each with a unique change-point located
at position $u$. We fixed $\alpha = 0.8$; assuming for simplicity that each signal jumps a height of 1 at the change-point, we get $\bar{\gamma}^2 = 1$, and it is then easy to calculate the critical value $\tilde{\sigma}_\alpha^2 = 10.78$. We set the variance of the centered Gaussian noise added to each signal to $\tilde{\sigma}_\alpha^2$, and ran 1000 trials for each $u$. We expect that for $50 \le u < 80$ there is convergence in accuracy to 1, and for $u > 80$, convergence in accuracy to zero. This is indeed what is seen in Figure 2 (left panel), with $u = 80$ the limit case between the two different modes of convergence.
[Figure 1 here: three timing plots, (a)-(c), with CPU time in seconds on the vertical axes, against p, n and k respectively.]
Figure 1: Speed trials. (a) CPU time for finding 50 change-points when there are 2000 probes and the number of profiles varies from 1 to 20. (b) CPU time when finding 50 change-points with the number of profiles fixed at 20 and the number of probes varying from 1000 to 10000 in intervals of 1000. (c) CPU time for 20 profiles and 2000 probes when selecting from 1 to 50 change-points.
[Figure 2 here: two accuracy-versus-p plots with curves for u = 50, 60, 70, 80, 90.]
Figure 2: Single change-point accuracy. Accuracy as a function of the number of profiles p when the change-point is placed in a variety of positions from u = 50 to u = 90 (left panel), or u = 50 ± 2 to u = 90 ± 2 (right panel), for a signal of length 100.
The right-hand-side panel of Figure 2 shows results for the same trials except that change-point
locations can vary uniformly in the interval $u \pm 2$. As predicted by Theorem 5, we see that the
accuracy of the method remains extremely robust against fluctuations in the exact change-point
location.
To investigate the potential for extending the results of the article to the case of many shared
change-points, we further simulated profiles of length 100 with a change-point at all of positions
10, 20, . . . , 90. The jump at each change-point was drawn from a centered Gaussian with variance
1. We then fixed various values of $\sigma^2$ and looked at convergence in accuracy as the number of signals increased. One thousand trials were performed for each $\sigma^2$, and results are presented in Figure 3. For change-points placed at the locations $\{10, 20, \ldots, 90\}$, it appears that a critical value $\tilde{\sigma}^2$ exists and lies close to 0.27; below 0.27 we have convergence in accuracy to 1, and above, convergence to zero.
An interesting application of the fast group LARS method is in the joint segmentation of copy-number profiles. For a set of individuals with the same disease (e.g. a type of cancer), we expect
there to be regions of the genome which are frequently gained (potentially containing oncogenes) or
lost (potentially containing tumor suppressor genes) in many or all of the patients. These regions are
separated by change-points. Figure 4 shows Chromosome 8 of three bladder cancer copy-number
profiles. We see that in the region of probe 60, a copy number change occurs on all three profiles.
Though it is not in exactly the same place on all profiles, the sharing of information across profiles
[Figure 3 here: accuracy against p for $\sigma^2$ = 0.05, 0.1, 0.2, 0.27, 0.4.]
Figure 3: Multiple change-point accuracy. Accuracy as a function of the number of profiles p when change-points are placed at the nine positions {10, 20, . . . , 90} and the value of $\sigma^2$ is varied from 0.1 to 0.4. The profile length is 100.
allows the approximate location to be found. The bottom right panel shows the smoothed profiles
superimposed on the same axes. A promising use of these smoothed signals, beyond visualization
of many profiles simultaneously, is to detect regions of frequent gain of loss by testing the average
profile values on each segment for significant positive (gain) or negative (loss) values. Preliminary
experiments on simulated and real data suggest that our method is more accurate and two orders of
magnitude faster than the state-of-the-art H-HMM [19] method for that purpose.
[Figure 4 here: four copy-number panels, values between -1 and 1 against probe index 0-150.]
Figure 4: Segmented and smoothed bladder cancer copy-number profiles. Probes shown are located on Chromosome 8. A shared change-point hotspot is found in the region of probe 60.
7 Conclusion
We have proposed a framework that extends total-variation based approximation to the multidimensional setting, developed a fast algorithm to approximately solve it, shown theoretically that
the method can consistently estimate change-points, and validated the results experimentally. We
have not discussed the problem of choosing the number of change-points, and suggest in practice
to use existing criteria for this purpose [6, 7]. We observed both theoretically and empirically that
increasing the number of profiles is highly beneficial to detect shared change-points.
Acknowledgements We thank Zaid Harchaoui and Francis Bach for useful discussions. This work
was supported by ANR grants ANR-07-BLAN-0311-03 and ANR-09-BLAN-0051-04.
References
[1] Z. Harchaoui, F. Vallet, A. Lung-Yut-Fong, and O. Cappe. A regularized kernel-based approach to unsupervised audio segmentation. In ICASSP '09: Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 1665-1668, Washington, DC, USA, 2009. IEEE Computer Society.
[2] N. R. Zhang, D. O. Siegmund, H. Ji, and J. Li. Detecting simultaneous change-points in multiple sequences. Biometrika, 97(3):631-645, 2010.
[3] M. Basseville and N. Nikiforov. Detection of abrupt changes: theory and application. Information and System Sciences Series. Prentice Hall, 1993.
[4] B. Brodsky and B. Darkhovsky. Nonparametric Methods in Change-Point Problems. Kluwer Academic Publishers, 1993.
[5] Y. C. Yao. Estimating the number of change-points via Schwarz criterion. Stat. Probab. Lett., 6:181-189, 1988.
[6] L. Birgé and P. Massart. Gaussian model selection. J. Eur. Math. Soc., 3:203-268, 2001.
[7] M. Lavielle and G. Teyssière. Detection of multiple change-points in multivariate time series. Lithuanian Mathematical Journal, 46(3):287-306, 2006.
[8] L. J. Vostrikova. Detection of disorder in multidimensional stochastic processes. Soviet Mathematics Doklady, 24:55-59, 1981.
[9] M. Lavielle and G. Teyssière. Adaptive detection of multiple change-points in asset price volatility. In G. Teyssière and A. Kirman, editors, Long-Memory in Economics, pages 129-156. Springer Verlag, Berlin, 2005.
[10] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259-268, 1992.
[11] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B Stat. Methodol., 67(1):91-108, 2005.
[12] Z. Harchaoui and C. Levy-Leduc. Catching change-points with lasso. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 617-624. MIT Press, Cambridge, MA, 2008.
[13] A. Rinaldo. Properties and refinements of the fused lasso. Ann. Stat., 37(5B):2922-2952, 2009.
[14] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Ann. Appl. Statist., 1(1):302-332, 2007.
[15] H. Hoefling. A path algorithm for the Fused Lasso Signal Approximator. Technical Report 0910.0526v1, arXiv, Oct. 2009.
[16] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B, 68(1):49-67, 2006.
[17] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res., 11:19-60, 2010.
[18] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
[19] S.P. Shah, W.L. Lam, R.T. Ng, and K.P. Murphy. Modeling recurrent DNA copy number alterations in array CGH data. Bioinformatics, 23(13):i450-i458, 2007.
3,487 | 4,158 | Beyond Actions: Discriminative Models for
Contextual Group Activities
Tian Lan
School of Computing Science
Simon Fraser University
[email protected]
Yang Wang
Department of Computer Science
University of Illinois at Urbana-Champaign
[email protected]
Weilong Yang
School of Computing Science
Simon Fraser University
[email protected]
Greg Mori
School of Computing Science
Simon Fraser University
[email protected]
Abstract
We propose a discriminative model for recognizing group activities. Our model
jointly captures the group activity, the individual person actions, and the interactions among them. Two new types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework.
Different from most of the previous latent structured models which assume a predefined structure for the hidden layer, e.g. a tree structure, we treat the structure of
the hidden layer as a latent variable and implicitly infer it during learning and inference. Our experimental results demonstrate that by inferring this contextual information together with adaptive structures, the proposed model can significantly
improve activity recognition performance.
1 Introduction
Look at the two persons in Fig. 1(a), can you tell they are doing two different actions? Once the
entire contexts of these two images are revealed (Fig. 1(b)) and we observe the interaction of the
person with other persons in the group, it is immediately clear that the first person is queuing, while
the second person is talking. In this paper, we argue that actions of individual humans often cannot
be inferred alone. We instead focus on developing methods for recognizing group activities by
modeling the collective behaviors of individuals in the group.
Before we proceed, we first clarify some terminology used throughout the rest of the paper. We use
action to denote a simple, atomic movement performed by a single person. We use activity to refer
to a more complex scenario that involves a group of people. Consider the examples in Fig. 1(b),
each frame describes a group activity: queuing and talking, while each person in a frame performs
a lower level action: talking and facing right, talking and facing left, etc.
Our proposed approach is based on exploiting two types of contextual information in group activities. First, the activity of a group and the collective actions of all the individuals serve as context
(we call it the group-person interaction) for each other, hence should be modeled jointly in a unified
framework. As shown in Fig. 1, knowing the group activity (queuing or talking) helps disambiguate
individual human actions which are otherwise hard to recognize. Similarly, knowing most of the
persons in the scene are talking (whether facing right or left) allows us to infer the overall group
activity (i.e. talking). Second, the action of an individual can also benefit from knowing the actions
of other surrounding persons (which we call the person-person interaction). For example, consider
Fig. 1(c). The fact that the first two persons are facing the same direction provides a strong cue that
[Figure 1 here: three image panels, (a)-(c).]
Figure 1: Role of context in group activities. It is often hard to distinguish actions from each individual person alone (a). However, if we look at the whole scene (b), we can easily recognize the activity of the group and the action of each individual. In this paper, we operationalize on this intuition and introduce a model for recognizing group activities by jointly considering the group activity, the action of each individual, and the interaction among certain pairs of individual actions (c).
both of them are queuing. Similarly, the fact that the last two persons are facing each other indicates
they are more likely to be talking.
Related work: Using context to aid visual recognition has received much attention recently. Most
of the work on context is in scene and object recognition. For example, work has been done on exploiting contextual information between scenes and objects [13], objects and objects [5, 16], objects
and so-called ?stuff? (amorphous spatial extent, e.g. trees, sky) [11], etc.
Most of the previous work in human action recognition focuses on recognizing actions performed
by a single person in a video (e.g. [2, 17]). In this setting, there has been work on exploiting contexts
provided by scenes [12] or objects [10] to help action recognition. In still image action recognition,
object-action context [6, 9, 23, 24] is a popular type of context used for human-object interaction.
The work in [3] is the closest to ours. In that work, person-person context is exploited by a new
feature descriptor extracted from a person and its surrounding area.
Our model is directly inspired by some recent work on learning discriminative models that allow
the use of latent variables [1, 6, 15, 19, 25], particularly when the latent variables have complex
structures. These models have been successfully applied in many applications in computer vision,
e.g. object detection [8, 18], action recognition [14, 19], human-object interaction [6], objects and
attributes [21], human poses and actions [22], image region and tag correspondence [20], etc. So
far only applications where the structures of latent variables are fixed have been considered, e.g. a
tree-structure in [8, 19]. However in our applications, the structures of latent variables are not fixed
and have to be inferred automatically.
Our contributions: In this paper, we develop a discriminative model for recognizing group activities. We highlight the main contributions of our model. (1) Group activity: most of the work
in human activity understanding focuses on single-person action recognition. Instead, we present
a model for group activities that dynamically decides on interactions among group members. (2)
Group-person and person-person interaction: although contextual information has been exploited
for visual recognition problems, ours introduces two new types of contextual information that have
not been explored before. (3) Adaptive structures: the person-person interaction poses a challenging
problem for both learning and inference. If we naively consider the interaction between every pair of
persons, the model might try to enforce two persons to take certain pairs of labels even though these two persons have nothing to do with each other. In addition, selecting a subset of connections allows one to remove "clutter" in the form of people performing irrelevant actions. Ideally, we
would like to consider only those person-person interactions that are strong. To this end, we propose
to use adaptive structures that automatically decide on whether the interaction of two persons should
be considered. Our experimental results show that our adaptive structures significantly outperform
other alternatives.
2 Contextual Representation of Group Activities
Our goal is to learn a model that jointly captures the group activity, the individual person actions, and
the interactions among them. We introduce two new types of contextual information, group-person
[Figure 2 here: two panels, (a)-(b).]
Figure 2: Graphical illustration of the model in (a). The edges represented by dashed lines indicate the connections are latent. Different types of potentials are denoted by lines with different colors in the example shown in (b).
interaction and person-person interaction. Group-person interaction represents the co-occurrence
between the activity of a group and the actions of all the individuals. Person-person interaction
indicates that the action of an individual can benefit from knowing the actions of other people in the
same scene. We present a graphical model representing all the information in a unified framework.
One important difference between our model and previous work is that in addition to learning the
parameters in the graphical model, we also automatically infer the graph structures (see Sec. 3).
We assume an image has been pre-processed (i.e. by running a person detector) so the persons in the
image have been found. On the training data, each image is associated with a group activity label,
and each person in the image is associated with an action label.
2.1 Model Formulation
A graphical representation of the model is shown in Fig. 2. We now describe how we model an
image $I$. Let $I_1, I_2, \ldots, I_m$ be the set of persons found in the image $I$; we extract features $x$ from
the image I in the form of x = (x0 , x1 , . . . , xm ), where x0 is the aggregation of feature descriptors
of all the persons in the image (we call it root feature vector), and xi (i = 1, 2, . . . , m) is the feature
vector extracted from the person Ii . We denote the collective actions of all the persons in the image
as $h = (h_1, h_2, \ldots, h_m)$, where $h_i \in \mathcal{H}$ is the action label of the person $I_i$ and $\mathcal{H}$ is the set of all possible action labels. The image $I$ is associated with a group activity label $y \in \mathcal{Y}$, where $\mathcal{Y}$ is the
set of all possible activity labels.
We assume there are connections between some pairs of action labels (hj , hk ). Intuitively speaking,
this allows the model to capture important correlations between action labels. We use an undirected
graph $G = (V, E)$ to represent $(h_1, h_2, \ldots, h_m)$, where a vertex $v_i \in V$ corresponds to the action label $h_i$, and an edge $(v_j, v_k) \in E$ corresponds to the interactions between $h_j$ and $h_k$.
We use fw (x, h, y; G) to denote the compatibility of the image feature x, the collective action labels
h, the group activity label y, and the graph G = (V, E). We assume fw (x, h, y; G) is parameterized
by w and is defined as follows:
$$f_w(x, h, y; G) = w^\top \Phi(y, h, x; G) \qquad (1a)$$
$$= w_0^\top \phi_0(y, x_0) + \sum_{j \in V} w_1^\top \phi_1(x_j, h_j) + \sum_{j \in V} w_2^\top \phi_2(y, h_j) + \sum_{(j,k) \in E} w_3^\top \phi_3(y, h_j, h_k) \qquad (1b)$$
The model parameters $w$ are simply the combination of four parts, $w = \{w_0, w_1, w_2, w_3\}$. The details of the potential functions in Eq. 1 are described in the following:
Image-Action Potential $w_1^\top \phi_1(x_j, h_j)$: This potential function models the compatibility between the $j$-th person's action label $h_j$ and its image feature $x_j$. It is parameterized as:
$$w_1^\top \phi_1(x_j, h_j) = \sum_{b \in \mathcal{H}} w_{1b}^\top\, \mathbf{1}(h_j = b) \cdot x_j \qquad (2)$$
where $x_j$ is the feature vector extracted from the $j$-th person and we use $\mathbf{1}(\cdot)$ to denote the indicator function. The parameter $w_1$ is simply the concatenation of $w_{1b}$ for all $b \in \mathcal{H}$.
Action-Activity Potential $w_2^\top \phi_2(y, h_j)$: This potential function models the compatibility between the group activity label $y$ and the $j$-th person's action label $h_j$. It is parameterized as:
$$w_2^\top \phi_2(y, h_j) = \sum_{a \in \mathcal{Y}}\sum_{b \in \mathcal{H}} w_{2ab} \cdot \mathbf{1}(y = a) \cdot \mathbf{1}(h_j = b) \qquad (3)$$
Action-Action Potential $w_3^\top \phi_3(y, h_j, h_k)$: This potential function models the compatibility between a pair of individuals' action labels $(h_j, h_k)$ under the group activity label $y$, where $(j, k) \in E$ corresponds to an edge in the graph. It is parameterized as:
$$w_3^\top \phi_3(y, h_j, h_k) = \sum_{a \in \mathcal{Y}}\sum_{b \in \mathcal{H}}\sum_{c \in \mathcal{H}} w_{3abc} \cdot \mathbf{1}(y = a) \cdot \mathbf{1}(h_j = b) \cdot \mathbf{1}(h_k = c) \qquad (4)$$
Image-Activity Potential $w_0^\top \phi_0(y, x_0)$: This potential function is a root model which measures the compatibility between the activity label $y$ and the root feature vector $x_0$ of the whole image. It is parameterized as:
$$w_0^\top \phi_0(y, x_0) = \sum_{a \in \mathcal{Y}} w_{0a}^\top\, \mathbf{1}(y = a) \cdot x_0 \qquad (5)$$
The parameter $w_{0a}$ can be interpreted as a root filter that measures the compatibility of the class label $a$ and the root feature vector $x_0$.
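Putting the four potentials together, the score of a labeling is a sum of table lookups and dot products. A small sketch (the shapes and names are our own assumptions, not the original implementation):

```python
import numpy as np

def model_score(w0, w1, w2, w3, x0, X, h, y, edges):
    """Evaluate f_w(x, h, y; G) of Eq. 1. Assumed shapes: w0 is |Y| x d0,
    w1 is |H| x d, w2 is |Y| x |H|, w3 is |Y| x |H| x |H|; x0 is the root
    feature, X the m x d person features, h the m action labels, y the
    activity label, and edges the list of (j, k) pairs defining E."""
    s = w0[y] @ x0                                      # image-activity
    s += sum(w1[h[j]] @ X[j] for j in range(len(h)))    # image-action
    s += sum(w2[y, h[j]] for j in range(len(h)))        # action-activity
    s += sum(w3[y, h[j], h[k]] for j, k in edges)       # action-action
    return s
```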
3 Learning and Inference
We now describe how to infer the label given the model parameters (Sec. 3.1), and how to learn the
model parameters from a set of training data (Sec. 3.2). If the graph structure G is known and fixed,
we can apply standard learning and inference techniques of latent SVMs. For our application, a
good graph structure turns out to be crucial, since it determines which person interacts (i.e. provides
action context) with another person. The interaction of individuals turns out to be important for
group activity recognition, and fixing the interaction (i.e. graph structure) using heuristics does not
work well. We will demonstrate this experimentally in Sec. 4. We instead develop our own inference
and learning algorithms that automatically infer the best graph structure from a particular set.
3.1 Inference
Given the model parameters $w$, the inference problem is to find the best group activity label $y^*$ for a
new image x. Inspired by the latent SVM [8], we define the following function to score an image x
and a group activity label y:
$$F_w(x, y) = \max_{G_y}\max_{h_y} f_w(x, h_y, y; G_y) = \max_{G_y}\max_{h_y} w^\top \Phi(x, h_y, y; G_y) \qquad (6)$$
We use the subscript $y$ in the notations $h_y$ and $G_y$ to emphasize that we are now fixing on a particular activity label $y$. The group activity label of the image $x$ can be inferred as: $y^* = \arg\max_y F_w(x, y)$.
Since we can enumerate all the possible $y \in \mathcal{Y}$ and predict the activity label $y^*$ of $x$, the main difficulty of solving the inference problem is the maximization over $G_y$ and $h_y$ according to Eq. 6.
Note that in Eq. 6, we explicitly maximize over the graph G. This is very different from previous
work which typically assumes the graph structure is fixed.
The optimization problem in Eq. 6 is in general NP-hard since it involves a combinatorial search.
We instead use an coordinate ascent style algorithm to approximately solve Eq. 6 by iterating the
following two steps:
1. Holding the graph structure $G_y$ fixed, optimize the action labels $h_y$ for the $\langle x, y \rangle$ pair:
$$h_y = \arg\max_{h'} \; w^\top \Phi(x, h', y; G_y) \qquad (7)$$
2. Holding $h_y$ fixed, optimize the graph structure $G_y$ for the $\langle x, y \rangle$ pair:
$$G_y = \arg\max_{G'} \; w^\top \Phi(x, h_y, y; G') \qquad (8)$$
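The two steps can be alternated until the pair $(h_y, G_y)$ stops changing, i.e. a fixed point of the coordinate ascent. A schematic driver (the two helpers are hypothetical and stand in for loopy BP and the LP relaxation discussed below):

```python
def infer_labels_and_graph(w, x, y, init_G, argmax_h, argmax_G, max_iters=10):
    """Alternate Eq. 7 (argmax_h, e.g. loopy BP) and Eq. 8 (argmax_G,
    e.g. the LP relaxation of Eq. 9) for a fixed activity label y."""
    G = init_G
    h = argmax_h(w, x, y, G)
    for _ in range(max_iters):
        G_new = argmax_G(w, x, y, h)
        h_new = argmax_h(w, x, y, G_new)
        if h_new == h and G_new == G:    # converged to a fixed point
            break
        h, G = h_new, G_new
    return h, G
```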
The problem in Eq. 7 is a standard max-inference problem in an undirected graphical model. Here
we use loopy belief propagation to approximately solve it. The problem in Eq. 8 is still an NP-hard
problem since it involves enumerating all the possible graph structures. Even if we can enumerate
all the graph structures, we might want to restrict ourselves to a subset of graph structures that will
lead to efficient inference (e.g. when using loopy BP in Eq. 7). One obvious choice is to restrict
$G'$ to be a tree-structured graph, since loopy BP is exact and tractable for tree-structured models. However, as we will demonstrate in Sec. 4, a tree-structured graph built from a simple heuristic (e.g. minimum spanning tree) does not work that well. Another choice is to choose graph structures that are "sparse", since sparse graphs tend to have fewer cycles, and loopy BP tends to be efficient in graphs with fewer cycles. In this paper, we enforce graph sparsity by setting a threshold $d$ on the maximum degree of any vertex in the graph. When $h_y$ is fixed, we can formulate an integer linear program (ILP) to find the optimal graph structure (Eq. 8) with the additional constraint that the maximum vertex degree is at most $d$. Let $z_{jk} = 1$ indicate that the edge $(j, k)$ is included in the
graph, and 0 otherwise. The ILP can be written as:
$$\max_{z} \; \sum_{j \in V}\sum_{k \in V} z_{jk}\,\psi_{jk}\,, \quad \text{s.t.} \; \sum_{j \in V} z_{jk} \le d\,, \; \sum_{k \in V} z_{jk} \le d\,, \; z_{jk} = z_{kj}\,, \; z_{jk} \in \{0, 1\}\,, \; \forall j, k \qquad (9)$$
where we use $\psi_{jk}$ to collectively represent the summation of all the pairwise potential functions in Eq. 1 for the pair of vertices $(j, k)$. Of course, the optimization problem in Eq. 9 is still hard due to the integral constraint $z_{jk} \in \{0, 1\}$. But we can relax the value of $z_{jk}$ to a real value in the range
of [0, 1]. The solution of the LP relaxation might have fractional numbers. To get integral solutions,
we simply round them to the closest integers.
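For a modest number of persons, the LP relaxation of Eq. 9 can be written down directly with a generic LP solver. A sketch using scipy (the function name is ours); the symmetry $z_{jk} = z_{kj}$ is handled by optimizing one variable per unordered pair:

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_graph_structure(psi, d):
    """LP relaxation of Eq. 9: choose edges z in [0, 1] maximizing
    sum_jk z_jk * psi_jk subject to a per-vertex degree bound d,
    then round the fractional solution to the closest integers."""
    m = psi.shape[0]
    pairs = [(j, k) for j in range(m) for k in range(j + 1, m)]
    # linprog minimizes, so negate the gain of each unordered pair
    c = -np.array([psi[j, k] + psi[k, j] for (j, k) in pairs])
    A = np.zeros((m, len(pairs)))          # degree constraint per vertex
    for idx, (j, k) in enumerate(pairs):
        A[j, idx] = 1.0
        A[k, idx] = 1.0
    res = linprog(c, A_ub=A, b_ub=np.full(m, d), bounds=(0, 1), method="highs")
    z = np.round(res.x)
    return [(j, k) for idx, (j, k) in enumerate(pairs) if z[idx] > 0.5]
```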
3.2 Learning
Given a set of $N$ training examples $\langle x^n, h^n, y^n \rangle$ ($n = 1, 2, \ldots, N$), we would like to train the model
parameter w that tends to produce the correct group activity y for a new test image x. Note that the
action labels h are observed on training data, but the graph structure G (or equivalently the variables
z) are unobserved and will be automatically inferred. A natural way of learning the model is to adopt
the latent SVM formulation [8, 25] as follows:
$$\min_{w,\,\xi \ge 0,\, G_y} \; \frac{1}{2}\|w\|^2 + C\sum_{n=1}^{N} \xi^n \qquad (10a)$$
$$\text{s.t.} \quad \max_{G_{y^n}} f_w(x^n, h^n, y^n; G_{y^n}) - \max_{G_y}\max_{h_y} f_w(x^n, h_y, y; G_y) \ge \Delta(y, y^n) - \xi^n\,, \quad \forall n, \forall y \qquad (10b)$$
where $\Delta(y, y^n)$ is a loss function measuring the cost incurred by predicting $y$ when the ground-truth label is $y^n$. In standard multi-class classification problems, we typically use the 0-1 loss $\Delta_{0/1}$ defined as:
$$\Delta_{0/1}(y, y^n) = \begin{cases} 1 & \text{if } y \neq y^n \\ 0 & \text{otherwise} \end{cases} \qquad (11)$$
The constrained optimization problem in Eq. 10 can be equivalently written as an unconstrained
problem:
$$\min_{w} \; \frac{1}{2}\|w\|^2 + C\sum_{n=1}^{N} \left(L^n - R^n\right) \qquad (12a)$$
$$\text{where} \quad L^n = \max_{y}\max_{h_y}\max_{G_y}\left(\Delta(y, y^n) + f_w(x^n, h_y, y; G_y)\right)\,, \quad R^n = \max_{G_{y^n}} f_w(x^n, h^n, y^n; G_{y^n}) \qquad (12b)$$
We use the non-convex bundle optimization in [7] to solve Eq. 12. In a nutshell, the algorithm
iteratively builds an increasingly accurate piecewise quadratic approximation to the objective function. During each iteration, a new linear cutting plane is found via a subgradient of the objective
function and added to the piecewise quadratic approximation. Now the key issue is to compute two
subgradients $\partial_w L^n$ and $\partial_w R^n$ for a particular $w$, which we describe in detail below.
First we describe how to compute $\partial_w L^n$. Let $(y^*, h^*, G^*)$ be the solution to the following optimization problem:
$$\max_{y}\max_{h}\max_{G} \; \Delta(y, y^n) + f_w(x^n, h, y; G) \qquad (13)$$
[Figure 3 here: four node-graph diagrams, (a)-(d).]
Figure 3: Different structures of person-person interaction. Each node here represents a person in a frame. Solid lines represent connections that can be obtained from heuristics. Dashed lines represent latent connections that will be inferred by our algorithm. (a) No connection between any pair of nodes; (b) Nodes are connected by a minimum spanning tree; (c) Any two nodes within a Euclidean distance $\epsilon$ are connected (which we call the $\epsilon$-neighborhood graph); (d) Connections are obtained by adaptive structures. Note that (d) is the structure of person-person interaction of the proposed model.
Then it is easy to show that the subgradient $\partial_w L^n$ can be calculated as $\partial_w L^n = \Phi(x^n, y^*, h^*; G^*)$. The inference problem in Eq. 13 is similar to the inference problem in Eq. 6, except for an additional term $\Delta(y, y^n)$. Since the number of possible choices of $y$ is small (e.g. $|\mathcal{Y}| = 5$ in our case), we can enumerate all possible $y \in \mathcal{Y}$ and solve the inference problem in Eq. 6 for each fixed $y$.
Now we describe how to compute $\partial_w R^n$. Let $\hat{G}$ be the solution to the following optimization problem:
$$\max_{G'} \; f_w(x^n, h^n, y^n; G') \qquad (14)$$
Then we can show that the subgradient $\partial_w R^n$ can be calculated as $\partial_w R^n = \Phi(x^n, y^n, h^n; \hat{G})$. The problem in Eq. 14 can be approximately solved using the LP relaxation of Eq. 9. Using the two subgradients $\partial_w L^n$ and $\partial_w R^n$, we can optimize Eq. 10 using the algorithm in [7].
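Schematically, each bundle iteration therefore needs one loss-augmented inference per example and one structure search on the ground-truth labels. The sketch below assembles the two subgradients; `loss_augmented_infer` and `best_graph` are hypothetical helpers standing in for the inference machinery above:

```python
def subgradients(w, x_n, h_n, y_n, phi, loss_augmented_infer, best_graph):
    """Assemble the subgradients of L^n and R^n from Eq. 12 (a sketch).
    `phi` is assumed to return the joint feature vector Phi(x, h, y; G)."""
    # dL^n: solve Eq. 13 over (y, h, G), with the loss Delta(y, y_n) added
    y_s, h_s, G_s = loss_augmented_infer(w, x_n, y_n)
    dL = phi(x_n, h_s, y_s, G_s)
    # dR^n: solve Eq. 14, the best structure for the ground-truth (h_n, y_n)
    G_hat = best_graph(w, x_n, h_n, y_n)
    dR = phi(x_n, h_n, y_n, G_hat)
    return dL, dR
```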
4 Experiments
We demonstrate our model on the collective activity dataset introduced in [3]. This dataset contains
44 video clips acquired using low resolution hand held cameras. In the original dataset, all the
persons in every tenth frame of the videos are assigned one of the following five categories: crossing,
waiting, queuing, walking and talking, and one of the following eight pose categories: right, front-right, front, front-left, left, back-left, back and back-right. Based on the original dataset, we define
five activity categories including crossing, waiting, queuing, walking and talking. We define forty
action labels by combining the pose and activity information, i.e. the action labels include crossing
and facing right, crossing and facing front-right, etc. We assign each frame into one of the five
activity categories, by taking the majority of actions of persons (ignoring their pose categories) in
that frame. We select one fourth of the video clips from each activity category to form the test set,
and the rest of the video clips are used for training.
Rather than directly using certain raw features (e.g. the HOG descriptor [4]) as the feature vector
xi in our framework, we train a 40-class SVM classifier based on the HOG descriptor of each
individual and their associated action labels. In the end, each feature vector xi is represented as a
40-dimensional vector, where the k-th entry of this vector is the score of classifying this instance
to the k-th class returned by the SVM classifier. The root feature vector x0 of an image is also
represented as a 40-dimensional vector, which is obtained by taking an average over all the feature
vectors xi (i = 1, 2, ..., m) in the same image.
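In code, this preprocessing amounts to one pass through a pre-trained multi-class classifier followed by an average. The sketch below assumes an sklearn-style classifier whose `decision_function` returns one score per action class:

```python
import numpy as np

def build_features(person_hogs, action_svm):
    """Map the m persons of an image to 40-dimensional score vectors and
    form the root feature x0 as their average (Section 4's construction)."""
    X = action_svm.decision_function(person_hogs)   # m x 40 class scores
    x0 = X.mean(axis=0)                             # root feature vector
    return x0, X
```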
Results and Analysis: In order to comprehensively evaluate the performance of the proposed
model, we compare it with several baseline methods. The first baseline (which we call global bag-of-words) is an SVM model with linear kernel based on the global feature vector $x_0$ with a bag-of-words
style representation. The other baselines are within our proposed framework, with various ways of
setting the structures of the person-person interaction. The structures we have considered are illustrated in Fig. 3(a)-(c), including (a) no pairwise connection; (b) minimum spanning tree; (c) graph
obtained by connecting any two vertices within a Euclidean distance $\epsilon$ ($\epsilon$-neighborhood graph) with $\epsilon$ = 100, 200, 300. Note that in our proposed model the person-person interactions are latent (shown in Fig. 3(d)) and learned automatically. The performance of different structures of person-person
Figure 4: Confusion matrices for activity classification: (a) global bag-of-words (b) our approach. Rows are
ground-truths, and columns are predictions. Each row is normalized to sum to 1.
Method | Overall | Mean per-class
global bag-of-words | 70.9 | 68.6
no connection | 75.9 | 73.7
minimum spanning tree | 73.6 | 70.0
ε-neighborhood graph, ε = 100 | 74.3 | 72.9
ε-neighborhood graph, ε = 200 | 70.4 | 66.2
ε-neighborhood graph, ε = 300 | 62.2 | 62.5
Our Approach | 79.1 | 77.5
Table 1: Comparison of activity classification accuracies of different methods. We report both the overall and mean per-class accuracies due to the class imbalance. The first result (global bag-of-words) is tested in the multi-class SVM framework, while the other results are in the framework of our proposed model but with different structures of person-person interaction. The structures are visualized in Fig. 3.
interaction are evaluated and compared. We summarize the comparison in Table 1. Since the test set is
imbalanced, e.g. the number of crossing examples is more than twice that of the queuing or talking
examples, we report both overall and mean per-class accuracies. As we can see, for both overall and
mean per-class accuracies, our method achieves the best performance. The proposed model significantly outperforms global bag-of-words. The confusion matrices of our method and the baseline
global bag-of-words are shown in Fig. 4. There are several important conclusions we can draw from
these experimental results:
Importance of group-person interaction: The best result among the baselines comes from the structure with no connection between any pair of nodes, which clearly outperforms global bag-of-words. This demonstrates the
effectiveness of modeling group-person interaction, i.e. the connection between y and h in our model.
Importance of adaptive structures of person-person interaction: In Table 1, the pre-defined
structures such as the minimum spanning tree and the ε-neighborhood graph do not perform as well
as the one without person-person interaction. We believe this is because those pre-defined structures
are all based on heuristics and are not properly integrated with the learning algorithm. As a result,
they can create interactions that do not help (and sometimes even hurt) the performance. However, if
we consider the graph structure as part of our model and directly infer it using our learning algorithm,
we can make sure that the obtained structures are those useful for differentiating various activities.
Evidence for this is provided by the big jump in performance achieved by our approach.
We visualize the classification results and the learned structure of person-person interaction of our
model in Fig. 6.
5 Conclusion
We have presented a discriminative model for group activity recognition which jointly captures the
group activity, the individual person actions, and the interactions among them. We have exploited
two new types of contextual information: group-person interaction and person-person interaction.
We also introduce an adaptive structures algorithm that automatically infers the optimal structure of
person-person interaction in a latent SVM framework. Our experimental results demonstrate that
our proposed model outperforms other baseline methods.
Figure 5: Visualization of the weights across pairs of action classes for each of the five activity classes (panels (a)-(f)). Light cells indicate large values of weights. Consider example (a): under the activity label crossing, the model favors seeing actions of crossing with different poses together (indicated by the area bounded by the red box). We can also take a closer look at the weights within actions of crossing, as shown in (f): within the crossing category, the model favors seeing the same pose together, indicated by the light regions along the diagonal. It also favors some opposite poses, e.g. back-right with front-left. These make sense since people always cross the street in either the same or the opposite direction.
Figure 6: (Best viewed in color) Visualization of the classification results and the learned structure of person-person interaction (columns: Crossing, Waiting, Queuing, Walking, Talking). The top row shows correct classification examples and the bottom row shows incorrect examples. The labels C, S, Q, W, T indicate crossing, waiting, queuing, walking and talking respectively. The labels R, FR, F, FL, L, BL, B, BR indicate right, front-right, front, front-left, left, back-left, back and back-right respectively. The yellow lines represent the learned structure of person-person interaction, from which some important interactions for each activity can be obtained, e.g. a chain structure which connects persons facing the same direction is "important" for the queuing activity.
References
[1] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning.
In Advances in Neural Information Processing Systems, 2003.
[2] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. In IEEE
International Conference on Computer Vision, 2005.
[3] W. Choi, K. Shahid, and S. Savarese. What are they doing? : Collective activity classification using
spatio-temporal relationship among people. In 9th International Workshop on Visual Surveillance, 2009.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proc. IEEE Comput.
Soc. Conf. Comput. Vision and Pattern Recogn., 2005.
[5] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. In IEEE
International Conference on Computer Vision, 2009.
[6] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for static human-object interactions. In
Workshop on Structured Models in Computer Vision, 2010.
[7] T.-M.-T. Do and T. Artieres. Large margin training for hidden markov models with partially observed
states. In International Conference on Machine Learning, 2009.
[8] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part
model. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008.
[9] A. Gupta, A. Kembhavi, and L. S. Davis. Observing human-object interactions: Using spatial and functional compatibility for recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence,
31(10):1775–1789, 2009.
[10] D. Han, L. Bo, and C. Sminchisescu. Selection and context for action recognition. In IEEE International
Conference on Computer Vision, 2009.
[11] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. In European Conference on
Computer Vision, 2008.
[12] M. Marszalek, I. Laptev, and C. Schmid. Actions in context. In IEEE Computer Society Conference on
Computer Vision and Pattern Recognition, 2009.
[13] K. P. Murphy, A. Torralba, and W. T. Freeman. Using the forest to see the trees: A graphicsl model
relating features, objects, and scenes. In Advances in Neural Information Processing Systems, volume 16.
MIT Press, 2004.
[14] J. C. Niebles, C.-W. Chen, and L. Fei-Fei. Modeling temporal structure of decomposable motion segments for activity classification. In European Conference on Computer Vision, 2010.
[15] A. Quattoni, S. Wang, L.-P. Morency, M. Collins, and T. Darrell. Hidden conditional random fields. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 29(10):1848–1852, June 2007.
[16] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In IEEE
International Conference on Computer Vision, 2007.
[17] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local svm approach. In 17th
International Conference on Pattern Recognition, 2004.
[18] A. Vedaldi and A. Zisserman. Structured output regression for detection with partial truncation. In
Advances in Neural Information Processing Systems. MIT Press, 2009.
[19] Y. Wang and G. Mori. Max-margin hidden conditional random fields for human action recognition. In
Proc. IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recogn., 2009.
[20] Y. Wang and G. Mori. A discriminative latent model of image region and object tag correspondence. In
Advances in Neural Information Processing Systems (NIPS), 2010.
[21] Y. Wang and G. Mori. A discriminative latent model of object classes and attributes. In European
Conference on Computer Vision, 2010.
[22] W. Yang, Y. Wang, and G. Mori. Recognizing human actions from still images with latent poses. In
CVPR, 2010.
[23] B. Yao and L. Fei-Fei. Grouplet: a structured image representation for recognizing human and object
interactions. In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, San
Francisco, CA, June 2010.
[24] B. Yao and L. Fei-Fei. Modeling mutual context of object and human pose in human-object interaction
activities. In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, San
Francisco, CA, June 2010.
[25] C.-N. Yu and T. Joachims. Learning structural SVMs with latent variables. In International Conference
on Machine Learning, 2009.
Inductive Regularized Learning of Kernel Functions
Prateek Jain
Microsoft Research Bangalore
Bangalore, India
[email protected]
Brian Kulis
UC Berkeley EECS and ICSI
Berkeley, CA, USA
[email protected]
Inderjit Dhillon
UT Austin Dept. of Computer Sciences
Austin, TX, USA
[email protected]
Abstract
In this paper we consider the problem of semi-supervised kernel function learning. We first propose a general regularized framework for learning a kernel matrix,
and then demonstrate an equivalence between our proposed kernel matrix learning framework and a general linear transformation learning problem. Our result
shows that the learned kernel matrices parameterize a linear transformation kernel
function and can be applied inductively to new data points. Furthermore, our result gives a constructive method for kernelizing most existing Mahalanobis metric
learning formulations. To make our results practical for large-scale data, we modify our framework to limit the number of parameters in the optimization process.
We also consider the problem of kernelized inductive dimensionality reduction in
the semi-supervised setting. To this end, we introduce a novel method for this
problem by considering a special case of our general kernel learning framework
where we select the trace norm function as the regularizer. We empirically demonstrate that our framework learns useful kernel functions, improving the k-NN classification accuracy significantly in a variety of domains. Furthermore, our kernelized dimensionality reduction technique significantly reduces the dimensionality
of the feature space while achieving competitive classification accuracies.
1 Introduction
Learning kernel functions is an ongoing research topic in machine learning that focuses on learning
an appropriate kernel function for a given task. While several methods have been proposed, many
of the existing techniques can only be applied transductively [1–3]; i.e., they cannot be applied
inductively to new data points. Of the methods that can be applied inductively, several are either too
computationally expensive for large-scale data (e.g. hyperkernels [4]) or are limited to small classes
of possible learned kernels (e.g. multiple kernel learning [5]).
In this paper, we propose and analyze a general kernel matrix learning problem using provided side-information over the training data. Our learning problem regularizes the desired kernel matrix via
a convex regularizer chosen from a broad class, subject to convex constraints on the kernel. While
the learned kernel matrix should be able to capture the provided side-information well, it is not
clear how the information can be propagated to new data points. Our first main result demonstrates
that our kernel matrix learning problem is equivalent to learning a linear transformation (LT) kernel
function (a kernel of the form φ(x)^T W φ(y) for some matrix W ⪰ 0) with a specific regularizer.
With the appropriate representation of W , this result implies that the learned LT kernel function can
be naturally applied to new data. Additionally, we demonstrate that a large class of Mahalanobis
metric learning methods can be seen as learning an LT kernel function and so our result provides a
constructive method for kernelizing these methods. Our analysis recovers some recent kernelization
results for metric learning, but also implies several new results.
As our proposed kernel learning formulation learns a kernel matrix over the training points, the
memory requirements scale quadratically in the number of training points, a common issue arising
in kernel methods. To alleviate such issues, we propose an additional constraint to the learning
formulation to reduce the number of parameters. We prove that the equivalence to LT kernel function
learning still holds with the addition of this constraint, and that the resulting formulation can be
scaled to very large data sets.
We then focus on a novel application of our framework to the problem of inductive semi-supervised
kernel dimensionality reduction. Our method is a special case of our kernel function learning
framework with trace-norm as the regularization function. As a result, we learn low-rank linear
transformations, which correspond to low-dimensional embeddings of high- or infinite-dimensional
kernel embeddings; unlike previous kernel dimensionality methods, which are either unsupervised
(kernel-PCA) or cannot easily be applied inductively to new data (spectral kernels [6]), our method
intrinsically possesses both desirable properties. Furthermore, our method can handle a variety of
side-information, e.g., class labels, click-through rates, etc. Finally, we validate the effectiveness of
our proposed framework. We quantitatively compare several regularizers, including the trace-norm
regularizer for dimensionality reduction, over standard data sets. We also apply the methods to an
object recognition task in computer vision and qualitatively show results of dimensionality reduction
on a handwritten digits data set.
Related Work: Most of the existing kernel learning methods can be classified into two broad categories. The first category includes parametric approaches, where the learned kernel function is
restricted to be of a specific form and then the relevant parameters are learned according to the provided data. Prominent methods include multiple kernel learning [5], hyperkernels [4], infinite kernel
learning [7], and hyper-parameter cross-validation [8]. Most of these methods either lack modeling
flexibility, require non-convex optimization, or are restricted to a supervised learning scenario. The
second category includes non-parametric methods, which explicitly model geometric structure in the
data. Examples include spectral kernel learning [6], manifold-based kernel learning [9], and kernel
target alignment [3]. However, most of these approaches are limited to the transductive setting and
cannot be used to naturally generalize to new points. In comparison, our method combines both of
the above approaches. We propose a general non-parametric kernel matrix learning framework, similar to methods of the second category. However, we show that our learned kernel matrix corresponds
to a linear transformation kernel function parameterized by a PSD matrix. Hence, our method can
be applied to inductive settings also without sacrificing significant modeling power. Furthermore,
our methods can be applied to a variety of domains and with a variety of forms of side-information.
Existing work on learning linear transformations has largely focused on learning Mahalanobis distances; examples include [10–15], among others. POLA [13] and ITML [12] provide specialized
kernelization techniques for their respective metric learning formulations. Kernelization of LMNN
was discussed in [16], though it relied on a convex perturbation based formulation that can lead
to suboptimal solutions. Recently, [17] showed kernelization for a class of metric learning algorithms including LMNN and NCA [15]; as we will see, our result is more general and we can prove
kernelization over a larger class of problems and can also reduce the number of parameters to be
learned. Independent of our work, [18] recently proved a representer type of theorem for spectral
regularization functions. However, the framework they consider is different than ours in that they
are interested in sensing the underlying high-dimensional matrix using given measurements.
Kernel dimensionality reduction methods can generally be divided into two categories: 1) semi-supervised dimensionality reduction in the transductive setting, 2) supervised dimensionality reduction in the inductive setting. Methods in the first category include the incomplete Cholesky decomposition [19], colored maximum variance unfolding [20], manifold preserving semi-supervised
dimensionality reduction [21]. Methods in the second category include the kernel dimensionality reduction method [22] and Gaussian Process latent variable models [23]. Kernel PCA [24] reduces the
dimensionality in the inductive unsupervised setting, while various manifold learning methods can
reduce the dimensionality but only in the unsupervised transductive setting. In contrast, our dimensionality reduction method, which is an instantiation of our general kernel learning framework, can
perform kernel dimensionality reduction simultaneously in both the semi-supervised as well as the
inductive setting. Additionally, it can capture the manifold structure using an appropriate baseline
kernel function such as the one proposed by [25].
2 Learning Framework
Given an input kernel function κ : R^d × R^d → R, and some side-information over a set of points X = {x_1, x_2, ..., x_n}, the goal is to learn a new kernel function κ_W that is regularized against κ but incorporates the provided side-information (the use of the subscript W will become clear later). The initial kernel function κ is of the form κ(x, y) = φ(x)^T φ(y) for some mapping φ. Throughout the rest of this paper, we will denote φ_i as shorthand for φ(x_i), i.e., data point x_i after applying the mapping φ. We will also assume that the data vectors in X have been mapped via φ, resulting in Φ = {φ_1, φ_2, ..., φ_n}. Learning a kernel function from the provided side-information is an ill-posed problem, since infinitely many such kernels can satisfy the provided supervision. A common approach is to formulate a transductive learning problem to learn a new kernel matrix over the training data. Denoting the input kernel matrix as K = Φ^T Φ, we aim to learn a new kernel matrix K_W that is regularized against K while satisfying the available side-information. In this work, we study the following optimization problem:

    min_{K_W ⪰ 0}  f(K^{-1/2} K_W K^{-1/2})   s.t.   g_i(K_W) ≤ b_i,  1 ≤ i ≤ m,        (1)

where f and the g_i are functions from R^{n×n} → R. We call f the regularizer and the g_i the constraints. Note that if f and the constraints g_i are all convex functions, then the above problem can be solved optimally using standard convex optimization algorithms. Note that our results will also hold for unconstrained variants of the above problem, as well as variants that incorporate slack variables.

In general, such learning formulations are limited in that the learned kernel cannot readily be applied to new data points. However, we will show that the above proposed problem is equivalent to learning linear transformation (LT) kernel functions. Formally, an LT kernel function κ_W is a kernel function of the form κ_W(x, y) = φ(x)^T W φ(y), where W is a positive semi-definite (PSD) matrix; we can think of the LT kernel as describing the linear transformation φ_i → W^{1/2} φ_i. A natural way to learn an LT kernel function would be to learn the parameterization matrix W using the provided side-information. To this end, we consider the following problem:

    min_{W ⪰ 0}  f(W)   s.t.   g_i(Φ^T W Φ) ≤ b_i,  1 ≤ i ≤ m,        (2)

where, as before, the function f is the regularizer and the functions g_i are the constraints that encode the side-information. The constraints g_i are assumed to be a function of the matrix Φ^T W Φ of learned kernel values over the training data. We make two observations about this problem: first, for data mapped to high-dimensional spaces via kernel functions, this problem is seemingly impossible to optimize since the size of W grows quadratically with the dimensionality. We will show that (2) need not explicitly be solved for learning an LT kernel function. Second, most Mahalanobis metric learning methods may be viewed as a special case of the above framework, and we will discuss some of them throughout the paper.
2.1 Examples of Regularizers and Constraints
To make the kernel learning optimization problem concrete, we discuss a few examples of possible
regularizers and constraints.
For the regularizer f(A) = ½‖A − I‖²_F, the resulting kernel learning objective can be equivalently expressed as minimizing ½‖K^{-1} K_W − I‖²_F. Thus, the goal is to keep the learned kernel close to the input kernel subject to the constraints in g_i. Similarly, for f(A) = tr(A − I), the resulting objective can be expressed as minimizing tr(K^{-1} K_W − I). Another interesting regularizer is f(A) = tr(A) − log det(A). In this case, the resulting objective is to minimize the LogDet divergence D_ℓd(K_W, K) subject to the constraints given by the g_i. For linear g_i, this problem was studied in [12, 26].
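For concreteness, the LogDet divergence can be evaluated directly; the following minimal NumPy sketch (ours, assuming K is full-rank PSD) computes D_ℓd(K_W, K) = tr(K_W K^{-1}) − log det(K_W K^{-1}) − n:

    import numpy as np

    def logdet_divergence(KW, K):
        n = K.shape[0]
        B = np.linalg.solve(K, KW)            # K^{-1} K_W, avoiding an explicit inverse
        _, logabsdet = np.linalg.slogdet(B)   # log |det(K^{-1} K_W)|
        return np.trace(B) - logabsdet - n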
In terms of constraints, a pairwise squared Euclidean distance constraint between a pair of points (φ_i, φ_j) in feature space can be formulated as K_W(i,i) + K_W(j,j) − 2K_W(i,j) ≤ b or K_W(i,i) + K_W(j,j) − 2K_W(i,j) ≥ b; this constraint is clearly linear in the entries of K_W. Similarity constraints can be represented as K_W(i,j) ≥ b or K_W(i,j) ≤ b and are also linear in K_W. Relative distance constraints over a triplet (φ_i, φ_j, φ_k) specify that φ_i should be closer to φ_j than to φ_k, and are often used in metric learning formulations and ranking problems; such constraints can be easily formulated within our framework. Finally, non-parametric probability estimation constraints can be used to constrain the conditional probability of a class c given a data point φ_i:

    p̂(c|x) = ( Σ_{j∈c} K_W(i,j) ) / ( Σ_{t=1}^{C} Σ_{j∈t} K_W(i,j) ) ≥ b,

where C is the number of classes. This constraint can be written as a linear constraint over K_W after appropriate manipulation.
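All of the above constraints are linear and can be encoded as Tr(C_i K_W) ≤ b_i (or ≥ b_i) for suitable matrices C_i. The sketch below (our construction, consistent with the text but not taken verbatim from the paper) builds C_i for the distance and similarity cases:

    import numpy as np

    def distance_constraint(i, j, n):
        # Tr(C K_W) = K_W(i,i) + K_W(j,j) - 2 K_W(i,j), the squared feature-space distance
        C = np.zeros((n, n))
        C[i, i] += 1.0; C[j, j] += 1.0
        C[i, j] -= 1.0; C[j, i] -= 1.0
        return C

    def similarity_constraint(i, j, n):
        # Tr(C K_W) = K_W(i,j) (symmetrized so that C is symmetric)
        C = np.zeros((n, n))
        C[i, j] = C[j, i] = 0.5
        return C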
3 Analysis
We are now ready to analyze the connection between problems (1) and (2). We will show that
the solutions to the two problems are equivalent, in the sense that by optimally solving one of the
problems, the solution to the other can be computed in closed form. More importantly, this result
will yield insight into the type of kernel that is learned by the kernel learning problem.
We begin by defining the class of regularizers considered in our analysis. Note that each of the example regularizers discussed earlier satisfies the following definition of spectral functions.

Definition 3.1. We say that f : R^{n×n} → R is a spectral function if f(A) = Σ_i f_s(λ_i), where λ_1, ..., λ_n are the eigenvalues of A and f_s : R → R is a real-valued function over the reals. Note that if f_s is a convex function over the reals, then f is also convex.
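As a quick illustration of Definition 3.1 (our sketch), a spectral function is evaluated by applying the scalar function f_s to the eigenvalues of A; for example, the regularizer ‖A − I‖²_F corresponds to f_s(t) = (t − 1)²:

    import numpy as np

    def spectral_f(A, fs):
        # f(A) = sum_i fs(lambda_i) for a symmetric matrix A
        return sum(fs(lam) for lam in np.linalg.eigvalsh(A))

    frob_to_identity = lambda A: spectral_f(A, lambda t: (t - 1.0) ** 2)  # ||A - I||_F^2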
3.1 Learning Linear Transformation Kernels
Now we present our main result, i.e., for a spectral function f , problems (1) and (2) are equivalent.
Theorem 1. Let K ≻ 0 be an invertible matrix, let f be a spectral function, and denote the global minimum of the corresponding scalar function f_s as α. Let W* be an optimal solution to (2) and K*_W be an optimal solution to (1). Then

    W* = αI + Φ S* Φ^T,

where S* = K^{-1}(K*_W − αK)K^{-1}. Furthermore, K*_W = Φ^T W* Φ.

The first part of the theorem demonstrates that, given an optimal solution K*_W to (1), one can construct the corresponding solution W* to (2), while the second part shows the reverse (this also demonstrates why W is used in the subscript of the learned kernel). The proof of this theorem appears in the supplementary material. The main idea behind the proof is to first show that the optimal solution to (2) is always of the form W = αI + Φ S Φ^T, and then to obtain the closed-form expression for S using algebraic manipulations.
As a first consequence of this result, we can achieve induction over the learned kernels. Given that K_W = Φ^T W Φ, we can see that the learned kernel function is a linear transformation kernel; that is, κ_W(φ_i, φ_j) = φ_i^T W φ_j. Given a pair of new data points φ_{n1} and φ_{n2}, we use the fact that the learned kernel is a linear transformation kernel, along with the first result of the theorem (W = αI + Φ S Φ^T), to compute the learned kernel as:

    κ_W(x_{n1}, x_{n2}) = φ_{n1}^T W φ_{n2} = α κ(x_{n1}, x_{n2}) + Σ_{i,j=1}^{n} S_{ij} κ(x_{n1}, x_i) κ(x_j, x_{n2}).        (3)
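Equation (3) is what makes the learned kernel inductive: evaluating κ_W on unseen points requires only input-kernel evaluations against the n training points. A minimal sketch of this evaluation (our illustration; helper names are ours):

    import numpy as np

    def learned_kernel(x1, x2, X_train, kappa, S, alpha):
        # eq. (3): alpha * kappa(x1, x2) + k1^T S k2, where
        # k1[i] = kappa(x1, x_i) and k2[j] = kappa(x_j, x2) over the training set
        k1 = np.array([kappa(x1, xi) for xi in X_train])
        k2 = np.array([kappa(xj, x2) for xj in X_train])
        return alpha * kappa(x1, x2) + k1 @ S @ k2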
As mentioned in Section 2, many Mahalanobis metric learning methods can be viewed as a special
case of (2). Therefore, a corollary of Theorem 1 is that we can constructively apply these metric
learning methods in kernel space by solving their corresponding kernel learning problem, and then
compute the learned metrics via (3). Thus, W need not explicitly be constructed to learn the LT kernel. Kernelization of Mahalanobis metric learning has previously been established for some special
cases; our results generalize and extend previous methods, as well as provide simpler techniques in
some cases. Below, we elaborate with some special cases.
Example 1 [Information Theoretic Metric Learning (ITML)]: [12] proposed the following Mahalanobis metric learning problem formulation:

    min_{W ⪰ 0}  Tr(W) − log det(W)
    s.t.  d_W(φ_i, φ_j) ≤ b_ij,  (i, j) ∈ S,
          d_W(φ_i, φ_j) ≥ b_ij,  (i, j) ∈ D,

where S and D specify pairs of similar and dissimilar points, respectively, and d_W(φ_i, φ_j) = (φ_i − φ_j)^T W (φ_i − φ_j) is the Mahalanobis distance between φ_i and φ_j. ITML is an instantiation of our framework with regularizer f(A) = tr(A) − log det(A) and pairwise distance constraints encoded as the g_i functions. Furthermore, it is straightforward to show that f is a convex spectral function with global optimum α = 1, so the optimal W can be learned implicitly using (1). The corresponding kernel learning optimization problem simplifies to:

    min_{K_W}  D_ℓd(K_W, K)   s.t.   g_i(K_W) ≤ b_i,  1 ≤ i ≤ m,        (4)
where D_ℓd(K_W, K) = tr(K_W K^{-1}) − log det(K_W K^{-1}) − n is the LogDet divergence [12], and the positive definiteness of K_W is satisfied automatically. This recovers the kernelized metric learning problem analyzed in [12], where kernelization for this special case was established and an iterative projection algorithm for optimization was developed. Note that, in the analysis of [12], the g_i were limited to similarity and dissimilarity constraints; our result is therefore more general than the existing kernelization result, even for this special case.
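To give a flavor of the iterative projection approach, the sketch below performs one Bregman (LogDet) projection of K_W onto a single squared-distance constraint e^T K_W e ≤ b with e = e_i − e_j. This is a simplification that omits ITML's slack handling; the closed-form β is the exact slack-free projection, not the published update of [12]:

    import numpy as np

    def logdet_projection_step(KW, i, j, bound, upper=True):
        n = KW.shape[0]
        e = np.zeros(n); e[i], e[j] = 1.0, -1.0
        p = e @ KW @ e                         # current learned squared distance
        violated = (p > bound) if upper else (p < bound)
        if not violated or p <= 0:
            return KW                          # already feasible; nothing to do
        beta = (bound - p) / (p * p)           # solves p + beta * p^2 = bound
        v = KW @ e                             # rank-one direction K_W e
        return KW + beta * np.outer(v, v)      # projection: K_W + beta (K_W e)(K_W e)^T

Cycling such projections over all constraints (with appropriate corrections for inequality constraints, as in [12]) drives the iterate toward the solution of (4).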
Example 2 [Pseudo Online Metric Learning (POLA)]: [13] proposed the following metric learning formulation:

    min_{W ⪰ 0}  ‖W‖²_F   s.t.   y_ij (b − d_W(φ_i, φ_j)) ≥ 1,  ∀(i, j) ∈ P,

where y_ij = 1 if φ_i and φ_j are similar, and y_ij = −1 if φ_i and φ_j are dissimilar. P is a set of pairs of points with known distance constraints. POLA is an instantiation of (2) with f(A) = ½‖A‖²_F and side-information available in the form of pairwise distance constraints. Note that the regularizer f(A) = ½‖A‖²_F was also employed in [2, 27], and these methods also fall under our general formulation. In this case, f is once again a convex spectral function, and its global minimum is α = 0, so we can use (1) to solve for the learned kernel K_W as

    min_{K_W ⪰ 0}  ‖K_W K^{-1}‖²_F   s.t.   g_i(K_W) ≤ b_i,  1 ≤ i ≤ m.        (5)

The constraints g_i for this problem can be easily constructed by re-writing each of POLA's constraints as a function of Φ^T W Φ. Note that the above approach to kernelization is much simpler than the method suggested in [13], which involves a kernelized Gram-Schmidt procedure at each step of the algorithm.
Other Examples: The above two examples show that our analysis recovers two well-known kernelization results for Mahalanobis metric learning. However, there are several other metric learning
approaches that fall into our framework as well, including the large margin nearest neighbor metric learning method (LMNN) [11] and maximally collapsing metric learning (MCML) [14], both
of which can be seen as instantiations of our learning framework with a constant f, as well as relevant component analysis (RCA) [28] and Xing et al.'s Mahalanobis metric learning method for
clustering [10]. Given lack of space, we cannot detail the kernelization of all these methods, but
they follow in the same manner as in the above two examples. In particular, each of these methods
may be run in kernel space, and our analysis yields new insights into these methods; for example,
kernelization of LMNN [11] using Theorem 1 avoids the convex perturbation analysis in [16] that
leads to suboptimal solutions in some cases.
3.2 Parameter Reduction
One of the drawbacks to Theorem 1 is that the sizes of the matrices K_W and S are n × n, and thus grow quadratically with the number of data points. We would like to have a way to restrict our optimization over a smaller number of parameters, so we now discuss a generalization of (2) by introducing an additional constraint to make it possible to reduce the number of parameters to learn, permitting scalability to data sets with many training points and with very high dimensionality.

Theorem 1 shows that the optimal K*_W is of the form Φ^T W* Φ = αK + K S* K. In order to accommodate fewer parameters to learn, a natural option is to replace the unknown S matrix with a low-rank matrix J L J^T, where J ∈ R^{n×r} is a pre-specified matrix, L ∈ R^{r×r} is unknown (we use L instead of S to emphasize that S is of size n × n whereas L is r × r), and the rank r is a parameter of the algorithm. Then, we will explicitly enforce that the learned kernel is of this form.

By plugging K_W = αK + KSK into (1) and replacing S with J L J^T, the resulting optimization problem is given by:

    min_{L ⪰ 0}  f(αI + K^{1/2} J L J^T K^{1/2})   s.t.   g_i(αK + K J L J^T K) ≤ b_i,  1 ≤ i ≤ m.        (6)

While the above problem involves just r × r variables, the functions f and g_i are applied to n × n matrices and therefore the problem may still be computationally expensive to optimize. Below, we show that for any spectral function f and linear constraints g_i(K_W) = Tr(C_i K_W), (6) reduces to a problem that applies f and the g_i to r × r matrices only, which provides significant scalability.

Theorem 2. Let K = Φ^T Φ ≻ 0 and J ∈ R^{n×r}. Also, let the regularization function f be a spectral function (see Definition 3.1) such that the corresponding scalar function f_s has a global minimum at α. Then problem (6) is equivalent to the following problem over the reduced kernel K^J = J^T K J:

    min_{L ⪰ −α(K^J)^{-1}}  f((K^J)^{-1/2} (αK^J + K^J L K^J) (K^J)^{-1/2})
    s.t.  Tr(L J^T K C_i K J) ≤ b_i − Tr(αK C_i),  1 ≤ i ≤ m.        (7)
Note that (7) is over r × r matrices (after initial pre-processing) and is in fact similar to the kernel learning problem (1), but with a kernel K^J of smaller size r × r, r ≪ n. A proof of the above theorem is in the supplementary material, and follows by showing that for spectral functions the objective functions of the two problems differ by a universal constant.
Similar to (1), we can show that (6) is also equivalent to linear transformation kernel function learning. This enables us to naturally apply the above kernel learning problem in the inductive setting. We provide a proof of the following theorem in the supplementary material.

Theorem 3. Consider (6) with a spectral function f whose corresponding scalar function f_s has a global minimum at α, and let K ≻ 0 be invertible. Then, (6) and (7) are equivalent to the following linear transformation kernel learning problem (analogous to the connection between (1) and (2)):

    min_{W ⪰ 0, L}  f(W)   s.t.   Tr(C_i Φ^T W Φ) ≤ b_i,  1 ≤ i ≤ m,
                                  W = αI + Φ J L J^T Φ^T.        (8)

Note that, in contrast to (2), where the last constraint over W is achieved automatically, (8) requires that this constraint be satisfied during the optimization process, which leads to a reduced number of parameters for our kernel learning problem. The above theorem shows that our reduced-parameter kernel learning method (6) also implicitly learns a linear transformation kernel function; hence we can generalize the learned kernel to unseen data points using an expression similar to (3).
The parameter reduction approach presented in this section depends critically on the choice of J. A few simple heuristics for choosing J, beyond choosing a subset of the points from Φ, include a randomly sampled coefficient matrix, or clustering Φ into r clusters such that J is the cluster-membership indicator matrix. Also note that using this parameter reduction technique, we can scale the optimization to kernel learning problems with millions of points or more. For example, we have applied a special case of this scalable framework to learn kernels over data sets containing nearly half a million images, as well as the MNIST data set of 60,000 data points [29].
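As an illustration of the clustering heuristic, the sketch below (ours; clustering explicit features stands in for clustering Φ, and K^J = J^T K J follows our reading of (7)) builds the indicator matrix J and the reduced r × r kernel:

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_indicator_J(feats, r):
        # cluster the n points into r groups; row i of J has a 1 in column labels[i]
        labels = KMeans(n_clusters=r, n_init=10).fit_predict(feats)
        return np.eye(r)[labels]               # (n, r) cluster-membership indicator

    def reduced_kernel(K, J):
        return J.T @ K @ J                     # the r x r kernel K^J used in problem (7)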
4 Trace-norm based Inductive Semi-supervised Kernel Dimensionality Reduction (Trace-SSIKDR)
We now consider applying our framework to the scenario of semi-supervised kernel dimensionality
reduction, which provides a novel and practical application of our framework. While there exists a
variety of methods for kernel dimensionality reduction, most of these methods are unsupervised (e.g.
kernel-PCA) or are restricted to the transductive setting. In contrast, we can use our kernel learning
framework to learn a low-rank transformation of the feature vectors implicitly, which in turn provides
a low-dimensional embedding of the dataset. Furthermore, our framework permits a variety of side-information, such as pairwise or relative distance constraints, beyond the class label information
allowed by existing transductive methods.
We describe our method starting from the linear transformation problem. Our goal is to learn a low-rank linear transformation W whose corresponding low-dimensional mapped embedding of φ_i is
W^{1/2} φ_i. Even when the dimensionality of φ_i is very large, if the rank of W is low enough, then the
mapped embedding will have small dimensionality. With that in mind, a possible regularizer could
be the rank, i.e., f (A) = rank(A); one can easily show that this satisfies the definition of a spectral
function. Unfortunately, optimization is intractable in general with the non-convex rank function,
so we use the trace-norm relaxation for the matrix rank function, i.e., we set f (A) = Tr(A). This
function has been extensively studied as a relaxation for the rank function [30], and it satisfies the
definition of a spectral function (with α = 0). We also add a small Frobenius norm regularization
for ease of optimization (this does not affect the spectral property of the regularization function).
Then using Theorem 1, the resulting relaxed kernel learning problem is:
    min_{K_W ⪰ 0}  τ Tr(K^{-1/2} K_W K^{-1/2}) + ‖K^{-1/2} K_W K^{-1/2}‖²_F   s.t.   Tr(C_i K_W) ≤ b_i,  1 ≤ i ≤ m,        (9)

where τ > 0 is a parameter. The above problem can be solved using a method based on Uzawa's inexact algorithm, similar to [31].
We briefly describe the steps taken by our method at each iteration. For simplicity, denote K̃ = K^{-1/2} K_W K^{-1/2}; we will optimize with respect to K̃ instead of K_W. Let K̃^t be the t-th iterate. Associate a variable z_i^t, 1 ≤ i ≤ m, with each constraint at each iteration t, and let z_i^0 = 0, ∀i. Let η_t
Table 1: UCI Datasets: accuracy achieved by various methods. The numbers in parentheses show the rank of the corresponding learned kernels. Trace-SSIKDR achieves accuracy comparable to Frob (Frobenius norm regularization) and ITML (LogDet regularization) with a significantly smaller rank.

    Dataset        Gaussian   Frob       ITML       Frob LR   ITML LR-pre  ITML LR-post  Trace-SSIKDR
    Iris           0.99(40)   0.99(27)   0.99(40)   0.91(4)   0.93(4)      0.99(4)       0.99(4)
    Wine           0.80(105)  0.94(36)   0.99(105)  0.72(11)  0.85(11)     0.46(11)      0.94(11)
    Ionosphere     0.94(337)  0.98(64)   0.98(337)  0.98(19)  0.98(19)     0.93(19)      0.99(19)
    Soybean        0.89(624)  0.96(96)   0.96(624)  0.44(40)  0.87(40)     0.35(40)      0.96(40)
    Diabetes       0.75(251)  0.74(154)  0.76(251)  0.67(14)  0.62(14)     0.73(14)      0.74(14)
    Balance-scale  0.93(156)  0.96(106)  0.97(156)  0.97(10)  0.80(10)     0.82(10)      0.97(10)
    Breast-cancer  0.72(259)  0.73(61)   0.78(259)  0.69(21)  0.68(21)     0.68(21)      0.75(21)
    Spectf-heart   0.74(267)  0.87(39)   0.84(267)  0.84(22)  0.89(22)     0.89(22)      0.84(22)
    Heart-c        0.68(228)  0.78(62)   0.79(228)  0.73(39)  0.61(39)     0.55(39)      0.78(39)
    Heart-h        0.59(117)  0.69(71)   0.70(117)  0.56(31)  0.30(31)     0.56(31)      0.68(31)
be the step size at iteration t. The algorithm performs the following updates:

    U Σ U^T ← K^{1/2} ( Σ_i z_i^{t-1} C_i ) K^{1/2},        K̃^t ← U max(Σ − τI, 0) U^T,
    z_i^t ← z_i^{t-1} − η max(Tr(C_i K^{1/2} K̃^t K^{1/2}) − b_i, 0),  ∀i.
The above updates require computation of K^{1/2}, which is expensive for large high-rank matrices. However, using elementary linear algebra we can show that K̃ and the learned kernel function can be computed efficiently without computing K^{1/2}, by maintaining S = K^{-1/2} K̃ K^{-1/2} from step to step. Algorithm 1 details an efficient method for optimizing (9) and returns matrices Σ_k, D_k and V_k, all of which contain only O(nk) parameters, where k is the rank of K̃^t, which changes from iteration to iteration. Note that step 4 of the algorithm computes k singular vectors and requires O(nk²) time. Since k is typically significantly smaller than n, the computational cost will be significantly smaller than computing the whole SVD. Note that the learned embedding φ_i → K̃^{1/2} K^{-1/2} k_i, where k_i is a vector of input kernel function values between φ_i and the training data, can be computed efficiently as φ_i → Σ_k^{1/2} D_k V_k^T k_i, which does not require K^{1/2} explicitly. We defer the proof of correctness for Algorithm 1 to the supplementary material.
Algorithm 1 Trace-SSIKDR
Require: K, (C_i, b_i), 1 ≤ i ≤ m, τ, η
1: Initialize: z_i^0 = 0, t = 0
2: repeat
3:   t = t + 1
4:   Compute V_k and Σ_k, the top k eigenvectors and eigenvalues of (Σ_i z_i^{t-1} C_i) K, where k = max{j : λ_j > τ}
5:   D_k(i, i) ← 1/(v_i^T K v_i), 1 ≤ i ≤ k
6:   z_i^t ← z_i^{t-1} − η max(Tr(C_i K V_k D_k Σ_k D_k V_k^T K) − b_i, 0), ∀i.   // S^t = V_k D_k Σ_k D_k V_k^T
7: until convergence
8: Return Σ_k, D_k, V_k
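For readers who want to experiment, the following is a minimal NumPy sketch of the conceptual updates behind Algorithm 1 (ours, not the authors' implementation). For clarity it forms K^{1/2} explicitly and keeps the full n × n iterate, rather than the O(nk²) implicit scheme above; constraints are assumed to be given as Tr(C_i K_W) ≤ b_i (lower bounds can be encoded by negating C_i and b_i), and the step size and iteration count are illustrative:

    import numpy as np

    def trace_ssikdr(K, constraints, tau=0.1, eta=0.5, iters=200):
        # K: (n, n) PSD input kernel; constraints: list of (C_i, b_i)
        n = K.shape[0]
        w, U = np.linalg.eigh(K)
        w = np.maximum(w, 1e-10)                    # guard against tiny negative eigenvalues
        K_half = (U * np.sqrt(w)) @ U.T             # K^{1/2}
        z = np.zeros(len(constraints))              # dual variables, z_i^0 = 0
        K_tilde = np.zeros((n, n))
        for _ in range(iters):
            # form K^{1/2} (sum_i z_i C_i) K^{1/2} and soft-threshold its spectrum at tau
            Z = sum((zi * Ci for zi, (Ci, _) in zip(z, constraints)), np.zeros((n, n)))
            lam, V = np.linalg.eigh(K_half @ Z @ K_half)
            K_tilde = (V * np.maximum(lam - tau, 0.0)) @ V.T
            KW = K_half @ K_tilde @ K_half          # implied learned kernel matrix
            for i, (Ci, bi) in enumerate(constraints):
                z[i] -= eta * max(np.trace(Ci @ KW) - bi, 0.0)  # dual step on violations
        return K_tilde, z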
5 Experimental Results
We now present empirical evaluation of our kernel learning framework and our semi-supervised
kernel dimensionality approach when applied in conjunction with k-nearest neighbor classification.
In particular, using different regularization functions, we show that our framework can be used to
obtain significantly better kernels than the baseline kernels for k-NN classification. Additionally,
we show that our semi-supervised kernel dimensionality reduction approach achieves comparable
accuracy while significantly reducing the dimensionality of the linear mapping.
UCI Datasets: First, we evaluate the performance of our kernel learning framework on standard
UCI datasets. We measure accuracy of the learned kernels using 5-NN classification with two-fold
cross validation averaged over 10 runs. For training, we use pairwise (dis)similarity constraints as
described in Section 2.1. We select parameters l and u (right-hand side of the pairwise constraints)
using 5th and 95th percentiles of all the pairwise distances between points from the training dataset.
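The percentile-based choice of the bounds (l, u) is straightforward to reproduce; a small sketch (ours, using scipy):

    import numpy as np
    from scipy.spatial.distance import pdist

    def distance_thresholds(X, lower_pct=5, upper_pct=95):
        # 5th and 95th percentiles of all pairwise Euclidean distances in the training set
        d = pdist(X)
        return np.percentile(d, lower_pct), np.percentile(d, upper_pct)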
[Figure 1 appears here: (a) accuracy vs. training set size; (b) dimensionality of the learned mapping vs. training set size; (c)-(d) two-dimensional embeddings of USPS digits.]

Figure 1: (a): Mean classification accuracy on the Caltech-101 dataset obtained by 1-NN classification with learned kernels obtained by various methods. (b): Rank of the learned kernel functions obtained by various methods; the rank of the learned kernel function is the same as the reduced dimensionality of the dataset. (c): Two-dimensional embedding of 2000 USPS digits obtained using our method Trace-SSIKDR for a training set of just 100 USPS digits. Note that we use the inductive setting here and the embedding is color-coded according to the underlying digit. (d): Embedding of the USPS digits dataset obtained using kernel-PCA.
Table 1 shows the 5-NN classification accuracies achieved by our kernel learning framework with different regularization functions. Gaussian represents the baseline Gaussian kernel, Frob represents an instantiation of our framework with Frobenius norm (f(A) = ‖A‖²_F) regularization, while ITML corresponds to the LogDet regularization (f(A) = Tr(A) − log det(A)). For the latter case, our formulation is the same as the formulation proposed by [12]. Note that for almost all the datasets (except Iris and Diabetes), both Frob and ITML improve upon the baseline Gaussian kernel significantly.
We also compare our semi-supervised dimensionality reduction method Trace-SSIKDR (see Section 4) with baseline kernel dimensionality reduction methods Frob LR, ITML LR-pre, and ITML
LR-post. Frob LR reduces the rank of the learned matrix W (equivalently, it reduces the dimensionality) using Frobenius norm regularization by taking the top eigenvectors. Similarly, ITML LR-post
reduces the rank of the learned kernel matrix obtained using ITML by taking its top eigenvectors.
ITML LR-pre reduces the rank of the kernel function by reducing the rank of the training kernel matrix. The learned linear transformation W (or equivalently, the learned kernel function) should have
the same rank as that of the training kernel matrix, since the LogDet divergence preserves the range space
of the input kernel. We fix the rank of the learned W for Frob LR, ITML LR-pre, ITML LR-post as
the rank of the transformation W obtained by our Trace-SSIKDR method. Note that Trace-SSIKDR
achieves accuracies similar to Frob and ITML, while decreasing the rank significantly. Furthermore,
it is significantly better than the corresponding baseline dimensionality reduction methods.
Caltech-101: Next, we evaluate our kernel learning framework on the Caltech-101 dataset, a benchmark object recognition dataset containing over 3000 images. Here, we compare various methods
using 1-NN classification method and the accuracy is measured in terms of the mean recognition
accuracy per class. We use a pool of 30 images per class for our experiments, out of which a varying number of random images are selected for training and the remaining are used for testing the
learned kernel function. The baseline kernel function is selected to be the sum of four different
kernel functions: PMK [32], SPMK [33], Geoblur-1 and Geoblur-2 [34]. Figure 1 (a) shows the
accuracy achieved by various methods (acronyms represent the same methods as described in the
previous section). Clearly, ITML and Frob (which are specific instances of our framework) are able
to learn significantly more accurate kernel functions than the baseline kernel function. Furthermore,
our Trace-SSIKDR method is able to achieve reasonable accuracy while reducing the rank of the
kernel function significantly (Figure 1 (b)). Also note that Trace-SSIKDR achieves significantly
better accuracy than Frob LR, ITML LR-pre and ITML LR-post, although all of these methods have
the same rank as Trace-SSIKDR.
USPS Digits: Finally, we qualitatively evaluate our dimensionality reduction method on the USPS
digits dataset. Here, we train our method using 100 examples to learn a linear mapping to two
dimensions, i.e., a rank-2 matrix W. For the baseline kernel, we use the data-dependent kernel function proposed by [25], which also takes the data's manifold structure into account. We then embed 2000
(unseen) test examples into two dimensions using our learned low-rank transformation. Figure 1 (c)
shows the embedding obtained by our Trace-SSIKDR method, while Figure 1 (d) shows the embedding obtained by the kernel-PCA algorithm. Each point is color coded according to the underlying
digit. Note that our method is able to separate out most of the digits even in 2D, and is significantly
better than the embedding obtained using kernel-PCA.
Acknowledgements: This research was supported in part by NSF grant CCF-0728879.
References
[1] K. Tsuda, G. Rätsch, and M. K. Warmuth. Matrix exponentiated gradient updates for on-line learning and Bregman projection. JMLR, 6:995–1018, 2005.
[2] J. T. Kwok and I. W. Tsang. Learning with idealized kernels. In ICML, 2003.
[3] N. Cristianini, J. Shawe-Taylor, A. Elisseeff, and J. Kandola. On kernel-target alignment. In NIPS, 2001.
[4] C. S. Ong, A. J. Smola, and R. C. Williamson. Learning the kernel with hyperkernels. JMLR, 6:1043–1071, 2005.
[5] G. R. G. Lanckriet, N. Cristianini, P. L. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. JMLR, 5:27–72, 2004.
[6] Xiaojin Zhu, Jaz Kandola, Zoubin Ghahramani, and John Lafferty. Nonparametric transforms of graph kernels for semi-supervised learning. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, NIPS, volume 17, pages 1641–1648, 2005.
[7] Peter V. Gehler and Sebastian Nowozin. Let the kernel figure it out; principled learning of pre-processing for kernel classifiers. In CVPR, pages 2836–2843, 2009.
[8] Matthias Seeger. Cross-validation optimization for large scale hierarchical classification kernel methods. In NIPS, pages 1233–1240, 2006.
[9] Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux, Jean-Francois Paiement, Pascal Vincent, and Marie Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197–2219, 2004.
[10] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. J. Russell. Distance metric learning with application to clustering with side-information. In NIPS, pages 505–512, 2002.
[11] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In NIPS, 2005.
[12] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In ICML, pages 209–216, 2007.
[13] S. Shalev-Shwartz, Y. Singer, and A. Y. Ng. Online and batch learning of pseudo-metrics. In ICML, 2004.
[14] A. Globerson and S. T. Roweis. Metric learning by collapsing classes. In NIPS, 2005.
[15] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood component analysis. In NIPS, 2004.
[16] B. Kulis, S. Sra, and I. S. Dhillon. Convex perturbations for scalable semidefinite programming. In AISTATS, 2009.
[17] R. Chatpatanasiri, T. Korsrilabutr, P. Tangchanachaianan, and B. Kijsirikul. On kernelization of supervised Mahalanobis distance learners, 2008.
[18] Andreas Argyriou, Charles A. Micchelli, and Massimiliano Pontil. On spectral learning. JMLR, 11:935–953, 2010.
[19] F. R. Bach and M. I. Jordan. Predictive low-rank decomposition for kernel methods. In ICML, pages 33–40, 2005.
[20] L. Song, A. Smola, K. M. Borgwardt, and A. Gretton. Colored maximum variance unfolding. In NIPS, pages 1385–1392, 2007.
[21] Y. Song, F. Nie, C. Zhang, and S. Xiang. A unified framework for semi-supervised dimensionality reduction. Pattern Recognition, 41(9):2789–2799, 2008.
[22] K. Fukumizu, F. R. Bach, and M. I. Jordan. Kernel dimensionality reduction for supervised learning. In NIPS, 2003.
[23] R. Urtasun and T. Darrell. Discriminative Gaussian process latent variable model for classification. In ICML, pages 927–934, 2007.
[24] S. Mika, B. Schölkopf, A. J. Smola, K. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In NIPS, pages 536–542, 1998.
[25] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: from transductive to semi-supervised learning. In ICML, pages 824–831, 2005.
[26] Brian Kulis, Mátyás Sustik, and Inderjit S. Dhillon. Learning low-rank kernel matrices. In ICML, pages 505–512, 2006.
[27] Matthew Schultz and Thorsten Joachims. Learning a distance metric from relative comparisons. In NIPS, 2003.
[28] A. Bar-Hillel, T. Hertz, N. Shental, and D. Weinshall. Learning a Mahalanobis metric from equivalence constraints. JMLR, 6:937–965, 2005.
[29] P. Jain, B. Kulis, and K. Grauman. Fast image search for learned metrics. In CVPR, 2008.
[30] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, 2007.
[31] J. Cai, E. J. Candes, and Z. Shen. A singular value thresholding algorithm for matrix completion, 2008.
[32] K. Grauman and T. Darrell. The Pyramid Match Kernel: Efficient learning with sets of features. Journal of Machine Learning Research (JMLR), 8:725–760, April 2007.
[33] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, pages 2169–2178, 2006.
[34] A. C. Berg and J. Malik. Geometric blur for template matching. In CVPR, pages 607–614, 2001.
9
3,489 | 416 | ALCOVE: A Connectionist Model of Human Category Learning
John K. Kruschke
Department of Psychology and Cognitive Science Program
Indiana University, Bloomington IN 47405-4201 USA
e-mail: [email protected]
Abstract
ALCOVE is a connectionist model of human category learning that fits a
broad spectrum of human learning data. Its architecture is based on wellestablished psychological theory, and is related to networks using radial
basis functions. From the perspective of cognitive psychology, ALCOVE can
be construed as a combination of exemplar-based representation and errordriven learning. From the perspective of connectionism, it can be seen as
incorporating constraints into back-propagation networks appropriate for
modelling human learning.
1 INTRODUCTION
ALCOVE is intended to accurately model human, perhaps non-optimal, performance
in category learning. While it is a feed-forward network that learns by gradient
descent on error, it is unlike standard back propagation (Rumelhart, Hinton &
Williams, 1986) in its architecture, its behavior, and its goals. Unlike the standard
back-propagation network, which was motivated by generalizing neuron-like perceptrons, the architecture of ALCOVE was motivated by a molar-level psychological
theory, Nosofsky's (1986) generalized context model (GCM). The psychologically
constrained architecture results in behavior that captures the detailed course of human category learning in many situations where standard back propagation fares
less well. And, unlike most applications of standard back propagation, the goal of
ALCOVE is not to discover new (hidden-layer) representations after lengthy training,
but rather to model the course of learning itself (Kruschke, 1990c), by determining
which dimensions of the given representation are most relevant to the task, and how
strongly to associate exemplars with categories.
[Figure 1: The architecture of ALCOVE (Attention Learning covEring map), from bottom to top: stimulus dimension nodes, learned attention strengths, exemplar nodes, learned association weights, and category nodes. Exemplar nodes show their activation profile when r = q = 1 in Eqn. 1.]
2
THE MODEL
Like the GCM, ALCOVE assumes that input patterns can be represented as points in a
multi-dimensional psychological space, as determined by multi-dimensional scaling
algorithms (e.g., Shepard, 1962). Each input node encodes a single psychological
dimension, with the activation of the node indicating the value of the stimulus on
that dimension. Figure 1 shows the architecture of ALCOVE, illustrating the case of
just two input dimensions.
Each input node is gated by a dimensional attention strength α_i. The attention
strength on a dimension reflects the relevance of that dimension for the particular
categorization task at hand, and the model learns to allocate more attention to
relevant dimensions and less to irrelevant dimensions.
Each hidden node corresponds to a position in the multi-dimensional stimulus space,
with one hidden node placed at the position of every training exemplar. Each hidden
node is activated according to the psychological similarity of the stimulus to the
exemplar represented by the hidden node. The similarity function comes from the
GCM and the work of Shepard (1962; 1987): Let the position of the jth hidden
node be denoted as (h_j1, h_j2, ...), and let the activation of the jth hidden node be
denoted a_j^hid. Then

    a_j^hid = exp( -c [ Σ_i α_i |h_ji − a_i^in|^r ]^(q/r) )        (1)

where c is a positive constant called the specificity of the node, the sum is
taken over all input dimensions, a_i^in denotes the activation of the ith input node,
and r and q are constants determining the similarity metric and similarity
gradient, respectively.

[Figure 2: (a) Increasing attention on the horizontal axis and decreasing attention on
the vertical axis causes exemplars of the two categories (denoted by dots and +'s) to
have greater between-category dissimilarity and greater within-category similarity.
(After Nosofsky, 1986, Fig. 2.) (b) ALCOVE cannot differentially attend to diagonal
axes.]

For separable psychological dimensions, the city-block metric (r = 1) is used, while integral dimensions might
call for a Euclidean metric (r = 2). An exponential similarity gradient (q = 1) is
used here (Shepard, 1987; this volume), but a Gaussian similarity gradient (q = 2)
can sometimes be appropriate.
The dimensional attention strengths adjust themselves so that exemplars from different categories become less similar, and exemplars within categories become more
similar. Consider a simple case of four stimuli that form the corners of a square in
input space, as in Figure 2(a). The two left stimuli are mapped to one category
(indicated by dots) and the two right stimuli are mapped to another category (indicated by +'s). ALCOVE learns to increase the attention strength on the horizontal
axis, and to decrease the attention strength on the vertical axis. On the other hand,
ALCOVE cannot stretch or shrink diagonally, as suggested in Figure 2(b). This constraint is an accurate reflection of human performance, in that categories separated
by a diagonal boundary tend to take longer to learn than categories separa.ted by a
boundary orthogonal to one dimension.
Each hidden node is connected to output nodes that correspond to response categories. The connection from the jth hidden node to the kth category node has a
connection weight denoted w_kj, called the association weight between the exemplar
and the category. The output (category) nodes are activated by the linear rule used
in the GCM and the network models of Gluck and Bower (1988a,b):
    a_k^out = Σ_j w_kj a_j^hid        (2)
In ALCOVE, unlike the GCM, the association weights are learned and can take on any
real value, including negative values. Category activations are mapped to response
probabilities using the same choice rule as was used in the GCM and network models.
Thus,

    Pr(K) = exp( φ a_K^out ) / Σ_k exp( φ a_k^out )        (3)
where φ is a real-valued scaling constant. In other words, the probability of classifying the given stimulus into category K is determined by the magnitude of category
K's activation relative to the sum of all category activations.
The dimensional attention strengths, α_i, and the association weights, w_kj, are
learned by gradient descent on sum-squared error, as used in standard back propagation (Rumelhart et al., 1986) and in the network models of Gluck and Bower
(1988a,b). Details can be found in Kruschke (1990a,b). In fitting ALCOVE to human
learning data, there are four free parameters: the fixed specificity c in Equation 1;
the probability mapping constant φ in Equation 3; the association weight learning
rate; and the attention strength learning rate.
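To make the model's computations concrete, the following is a minimal sketch of the ALCOVE forward pass (Eqns. 1-3) and one learning step in Python with NumPy. The function names, the simple per-category teacher signal, and the default parameter values are illustrative assumptions rather than part of the original specification (the full model also uses a "humble teacher" variant of the error signal, omitted here for brevity).

import numpy as np

def alcove_forward(stimulus, exemplars, attention, assoc, c=1.0, r=1, q=1, phi=2.0):
    """One forward pass of ALCOVE (Eqns. 1-3).
    stimulus:  (n_dims,) input coordinates in psychological space
    exemplars: (n_exemplars, n_dims) hidden-node positions h_ji
    attention: (n_dims,) dimensional attention strengths alpha_i
    assoc:     (n_categories, n_exemplars) association weights w_kj
    """
    # Eqn. 1: exemplar activations fall off with attention-weighted distance
    dist = np.sum(attention * np.abs(exemplars - stimulus) ** r, axis=1)
    a_hid = np.exp(-c * dist ** (q / r))
    a_out = assoc @ a_hid                                 # Eqn. 2
    p = np.exp(phi * a_out) / np.exp(phi * a_out).sum()  # Eqn. 3
    return a_hid, a_out, p

def alcove_learn(stimulus, teacher, exemplars, attention, assoc,
                 c=1.0, lr_w=0.10, lr_a=0.01):
    """Gradient descent on sum-squared error, written for r = q = 1."""
    a_hid, a_out, _ = alcove_forward(stimulus, exemplars, attention, assoc, c=c)
    err = teacher - a_out                       # teacher, e.g., +1/-1 per category
    assoc += lr_w * np.outer(err, a_hid)        # association-weight update
    # Attention update: backpropagate through the exponential similarity,
    # d a_hid_j / d alpha_i = -c |h_ji - s_i| a_hid_j.
    back = (err @ assoc) * a_hid                # per-exemplar error signal
    attention -= lr_a * c * (back @ np.abs(exemplars - stimulus))
    np.maximum(attention, 0.0, out=attention)   # attention strengths stay >= 0
    return attention, assoc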
In summary, ALCOVE extends Nosofsky's (1986) GCM by having a learning mechanism and by allowing any positive or negative values for association weights, and
it extends Gluck and Bower's (1988a,b) network models by including explicit attention strengths and by using continuous input dimensions. It is a combination of
exemplar-based category representations with error-driven learning, as alluded to
by Estes et al. (1989; see also Hurwitz, 1990). ALCOVE can also be construed as a
form of (non-)radial basis function network, if r = q = 2 in Equation 1. In the form
described here, the hidden nodes are placed at positions where training exemplars
occur, but another option, described by Kruschke (1990a,b), is to scatter hidden
nodes over the input space to form a covering map. Both these methods work
well in fitting human data in some situations, but the exemplar-based approach
has advantages (Kruschke, 1990a,b). ALCOVE can also be compared to a standard
back-propagation network that has adaptive attentional multipliers on its input
nodes (cf. Mozer and Smolensky, 1989), but with fixed input-to-hidden weights
(Kruschke 1990b, p.33). Such a network behaves similarly to a covering-map version of ALCOVE. Moreover, such back-prop networks are susceptible to catastrophic
retroactive interference (Ratcliff, 1990; McCloskey & Cohen, 1989), unlike ALCOVE.
3 APPLICATIONS
Several applications of ALCOVE to modelling human performance are detailed elsewhere (Kruschke, 1990a,b); a few will be summarized here.
3.1 RELATIVE DIFFICULTY OF CATEGORY STRUCTURES
The classic work of Shepard, Hovland and Jenkins (1961) explored the relative
difficulty of learning different category structures. As a simplified example, the
linearly separable categories in Figure 2( a) are easier to learn than the exclusive-or
problem (which would have the top-left and bottom-right exemplars mapped to
one category, and the top-right and bottom-left mapped to the other). Shepard et
al. carefully considered several candidate explanations for the varying difficulties,
and concluded that some form of attentional learning was necessary to account for
their results. That is, people seemed to be able to determine which dimensions
were relevant or irrelevant, and they allocated attention to dimensions a.ccordingly.
Category structures with fewer relevant dimensions were easier to learn. ALCOVE
has just the sort of attentional learning mechanism called for, and can match the
relative difficulties observed by Shepard et al.
ALCOVE: A Connectionist Model of Human Category Learning
3.2 BASE-RATE NEGLECT
A recent series of experiments (Gluck & Bower, 1988b; Estes et al., 1989; Shanks,
1990; Nosofsky et al., 1991) investigated category learning when the assignment of
exemplars to categories was probabilistic and the base rates of the categories were
unequal. In these experiments, there were two categories (one "rare" and the other
"common") and four binary-valued stimulus dimensions. The stimulus values were
denoted s1 and s1* for the first dimension, s2 and s2* for the second dimension,
and so on. The probabilities were arranged such that over the course of training, the
normative probability of each category, given s1 alone, was 50%. However, when
presented with feature s1 alone, human subjects classified it as the rare category
significantly more than 50% of the time. It was as if people were neglecting the
base rates of the categories.
Gluck and Bower (1988b) and Estes et al. (1989) compared two candidate models to
account for the apparent base-rate neglect. One was a simple exemplar-based model
that kept track of each training exemplar, and made predictions of categorizations
by summing up frequencies of occurrence of each stimulus value for each category.
The exemplar-based model was unable to predict base-rate neglect. The second
model they considered, the "double-node network," was a one-layer error-driven
network that encoded each binary-valued dimension with a pair of input nodes.
The double-node model was able to show base-rate neglect.
ALCOVE is an exemplar-based model, and so it is challenged by those results. In
fact, Kruschke (1990a,b) and Nosofsky et al. (1991) show that ALCOVE fits the trial-by-trial learning and base-rate neglect data as well as or better than the double-node
model.
3.3 THREE-STAGE LEARNING OF RULES AND EXCEPTIONS
One of the best-known connectionist models of human learning is Rumelhart and
McClelland's (1986) model of verb past tense acquisition. One of the main phenomena they wished to model was three-stage learning of irregular verbs: First a few
high-frequency irregulars are learned; second, many regular verbs are learned with
some interference to the previously learned irregulars; and third, the high-frequency
irregulars are re-learned.[1] In order to reproduce three-stage learning in their model,
Rumelhart and McClelland had to change the training corpus during learning, so
that early on the network was trained with ten verbs, 80% of which were irregular,
and later the network was trained with 420 verbs, only 20% of which were irregular.
It remains a challenge to connectionist models to show three-stage learning of rules
and exceptions while keeping the training set constant.
While ALCOVE has not been applied to the verb-learning situation (and perhaps
should not be, as a multi-dimensional similarity-space might not be a tractable
representation for verbs), it can show three-stage learning of rules and exceptions
in simpler but analogous situations. Figure 3 shows an arrangement of training
exemplars, most of which can be classified by the simple rule, "if it's to the right
[1] There is evidence that three-stage learning is only very subtle in verb past tense
acquisition (e.g., Marcus, 1990), but whether it exists more robustly in the simpler category
learning domains addressed by ALCOVE is still an open question.
[Figure 3: Left panel shows the arrangement of rule-following (R) and exceptional (E)
cases on either side of a dashed category boundary. Right panel shows the performance of
ALCOVE (probability correct, roughly 0.1 to 0.9, as a function of learning trial). The ratio
of E to R cases and all parameters of the model were fixed throughout training.]
of the dashed line, then it's in the 'rectangle' category, otherwise it's in the 'oval'
category." The rule-following cases are marked with an "R." There are two exceptional cases near the dashed line, marked with an "E." Exceptional exemplars
occurred 4 times as often as rule-following exemplars. The right panel of Figure 3
shows that ALCOVE initially learns the E cases better than the R cases, but that
later in learning the R cases surpass the E's. The reason is that early in learning,
ALCOVE is primarily building up association weights and has not yet shifted much
attention away from the irrelevant dimension. Associations from the E cases grow
more quickly because they are more frequent. Once the associations are established,
then there is a basis for attention to be shifted away from the irrelevant dimension,
rapidly improving performance on the R cases. At the time of this writing, these
results have the status of a provocative demonstration, but experiments with human
subjects in similar learning situations are presently being undertaken.
Acknowledgment
This research was supported in part by Biomedical Research Support Grant RR
7031-25 from the National Institutes of Health.
References
Estes, W. K., Campbell, J. A., Hatsopoulos, N., & Hurwitz, J. B. (1989). Base-rate
effects in category learning: A comparison of parallel network and memory storage-retrieval models. J. Exp. Psych.: Learning, Memory and Cognition, 15, 556-576.
Gluck, M. A. & Bower, G. H. (1988a). Evaluating an adaptive network model of
human learning. J. of Memory and Language, 27, 166-195.
Gluck, M. A. & Bower, G. H. (1988b). From conditioning to category learning: An
adaptive network model. J. Exp. Psych. General, 117, 227-247.
Hurwitz, J. B. (1990). A hidden-pattern unit network model of category learning.
Doctoral dissertation, Harvard University.
ALCOVE: A Connectionist Model of Human Category Learning
Kruschke, J. K. (1990a). A connectionist model of category learning. Doctoral dissertation, University of California at Berkeley. Available from University Microfilms
International.
Kruschke, J. K. (1990b). ALCOVE: A connectionist model of category learning.
Research Report 19, Cognitive Science Program, Indiana University.
Kruschke, J. K. (1990c). How connectionist models learn: The course of learning
in connectionist networks. Behavioral and Brain Sciences, 13, 498-499.
Marcus, G. F., Ullman, M., Pinker, S., Hollander, M., Rosen, T. J., & Xu, F. (1990).
Overregularization. Occasional Paper #41, MIT Center for Cognitive Science.
McCloskey, M. & Cohen, N. J. (1989). Catastrophic interference in connectionist
networks: the sequential learning problem. In: G. Bower (ed.), The Psychology of
Learning and Motivation, Vol. 24. New York: Academic Press.
Mozer, M. C., & Smolensky, P. (1989). Skeletonization: A technique for trimming
the fat from a network via relevance assessment. In: D. S. Touretzky (ed.), Advances
in Neural Information Processing Systems, I, pp. 107-115. San Mateo, CA: Morgan
Kaufmann.
Nosofsky, R. M. (1986). Attention, similarity and the identification-categorization
relationship. J. Exp. Psych. General, 115, 39-57.
Nosofsky, R. M., Kruschke, J. K., & McKinley, S. (1991). Comparisons between
adaptive network and exemplar models of classification learning. Research Report
35, Cognitive Science Program, Indiana University.
Ratcliff, R. (1990). Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions. Psychological Review, 97, 285-308.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal
representations by back-propagating errors. In: D. E. Rumelhart & J. L. McClelland
(eds.), Parallel Distributed Processing, Vol. 1, pp. 318-362. Cambridge, MA: MIT
Press.
Rumelhart, D. E., & McClelland, J. L. (1986). On learning the past tenses of
English verbs. In: J. L. McClelland & D. E. Rumelhart (eds.), Parallel Distributed
Processing, Vol. 2, pp. 216-271. Cambridge, MA: MIT Press.
Shanks, D. R. (1990). Connectionism and the learning of probabilistic concepts .
Quarterly J. Exp. Psych., 42A, 209-237.
Shepard, R. N. (1962). The analysis of proximities: Multidimensional scaling with
an unknown distance function, I & II. Psychometrika, 27, 125-140, 219-246.
Shepard, R. N. (1987). Toward a universal law of generalization for psychological
science. Science, 237, 1317-1323.
Shepard, R. N., Hovland, C. I., & Jenkins, H. M. (1961). Learning and memorization of classifications. Psychological Monographs, 75(13), Whole No. 517.
3,490 | 4,160 | Bootstrapping Apprenticeship Learning
Abdeslam Boularias
Department of Empirical Inference
Max-Planck Institute for Biological Cybernetics
72076 Tübingen, Germany
[email protected]
Brahim Chaib-Draa
Department of Computer Science
Laval University
Quebec G1V 0A6, Canada
[email protected]
Abstract
We consider the problem of apprenticeship learning where the examples, demonstrated by an expert, cover only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient tool for generalizing the demonstration, based on the assumption that the expert is maximizing a utility function
that is a linear combination of state-action features. Most IRL algorithms use a
simple Monte Carlo estimation to approximate the expected feature counts under
the expert's policy. In this paper, we show that the quality of the learned policies
is highly sensitive to the error in estimating the feature counts. To reduce this
error, we introduce a novel approach for bootstrapping the demonstration by assuming that: (i), the expert is (near-)optimal, and (ii), the dynamics of the system
is known. Empirical results on gridworlds and car racing problems show that our
approach is able to learn good policies from a small number of demonstrations.
1 Introduction
Modern robots are designed to perform complicated planning and control tasks, such as manipulating objects, navigating in outdoor environments, and driving in urban settings. Unfortunately, manually programming these tasks is almost infeasible in practice due to their high number of states.
Markov Decision Processes (MDPs) provide an efficient tool for handling such tasks with a little
help from an expert. The expert's help consists in simply specifying a reward function. However, in
many practical problems, even specifying a reward function is not easy. In fact, it is often easier to
demonstrate examples of a desired behavior than to define a reward function (Ng & Russell, 2000).
Learning policies from demonstration, a.k.a. apprenticeship learning, is a technique that has been
widely used in robotics. An efficient approach to apprenticeship learning, known as Inverse Reinforcement Learning (IRL) (Ng & Russell, 2000; Abbeel & Ng, 2004), consists in recovering a
reward function under which the policy demonstrated by an expert is near-optimal, rather than directly mimicking the expert's actions. The learned reward is then used for finding an optimal policy.
Consequently, the expert's actions can be predicted in states that have not been encountered during
the demonstration. Unfortunately, as already pointed out by Abbeel & Ng (2004), recovering a reward
function is an ill-posed problem. In fact, the expert's policy can be optimal under an infinite number
of reward functions. Most of the work on apprenticeship learning via IRL focused on solving this
particular problem by using different types of regularization and loss cost functions (Ratliff et al.,
2006; Ramachandran & Amir, 2007; Syed & Schapire, 2008; Syed et al., 2008).
In this paper, we focus on another important problem occurring in IRL. IRL-based algorithms rely on
the assumption that the reward function is a linear combination of state-action features. Therefore,
the value function of any policy is a linear combination of the expected discounted frequency (count)
of encountering each state-action feature. In particular, the value function of the expert's policy is
approximated by a linear combination of the empirical averages of the features, estimated from
the demonstration (the trajectories). In practice, this method works efficiently only if the number
of examples is sufficiently large to cover all the states, or the dynamics of the system is nearly
deterministic. For the tasks related to systems with a stochastic dynamics and a limited number of
available examples, we propose an alternative method for approximating the expected frequencies
of the features under the expert's policy. Our approach takes advantage of the fact that the expert's
partially demonstrated policy is near-optimal, and generalizes the expert's policy beyond the states
that appeared in the demonstration. We show that this technique can be efficiently used to improve
the performance of two known IRL algorithms, namely Maximum Margin Planning (MMP) (Ratliff
et al., 2006), and Linear Programming Apprenticeship Learning (LPAL) (Syed et al., 2008).
2 Preliminaries
Formally, a finite-state Markov Decision Process (MDP) is a tuple (S, A, {T^a}, R, α, γ), where: S
is a set of states, A is a set of actions, T^a is a transition matrix defined as ∀s, s′ ∈ S, a ∈ A :
T^a(s, s′) = Pr(s_{t+1} = s′ | s_t = s, a_t = a), R is a reward function (R(s, a) is the reward associated with the execution of action a in state s), α is the initial state distribution, and γ is a discount
factor. We denote by MDP\R a Markov Decision Process without a reward function, i.e. a tuple
(S, A, {T^a}, α, γ). We assume that the reward function R is given by a linear combination of k
feature vectors f_i with weights w_i: ∀s ∈ S, ∀a ∈ A : R(s, a) = Σ_i w_i f_i(s, a). A deterministic
policy π is a function that returns an action π(s) for each state s. A stochastic policy π is a probability distribution on the action to be executed in each state, defined as π(s, a) = Pr(a_t = a | s_t = s).
The value V(π) of a policy π is the expected sum of rewards that will be received if policy π is
followed, i.e. V(π) = E[ Σ_{t=0}^{∞} γ^t R(s_t, a_t) | α, π, T ]. An optimal policy π* is one satisfying
π* = arg max_π V(π). The occupancy μ_π of a policy π is the discounted state-action visit distribution, defined as μ_π(s, a) = E[ Σ_{t=0}^{∞} γ^t δ_{s_t,s} δ_{a_t,a} | α, π, T ], where δ is the Kronecker
delta. We also use μ_π(s) to denote Σ_a μ_π(s, a). The following linear constraints, known as Bellman-flow
constraints, are necessary and sufficient for defining an occupancy measure of a policy:

    { μ_π(s) = α(s) + γ Σ_{s′∈S} Σ_{a∈A} μ_π(s′, a) T^a(s′, s) ,   Σ_{a∈A} μ_π(s, a) = μ_π(s) ,   μ_π(s, a) ≥ 0 }        (1)

A policy π is well-defined by its occupancy measure μ_π; one can interchangeably use π and μ_π
to denote a policy. The set of feasible occupancy measures is denoted by G. The frequency of a
feature f_i for a policy π is given by v_{i,π} = F(i, .) μ_π, where F is a k by |S||A| feature matrix such
that F(i, (s, a)) = f_i(s, a). Using this definition, the value of a policy π can be written as a linear
function of the frequencies: V(π) = w^T F μ_π = w^T v_π, where v_π is the vector of the v_{i,π}. Therefore,
the value of a policy is completely determined by the frequencies (or counts) of the features f_i.
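For a fixed policy, the Bellman-flow constraints (1) reduce to a linear system, so the occupancy measure and the feature frequencies v_π = F μ_π can be computed directly. The sketch below is an illustrative implementation; the array layout (transition tensor T[a, s, s′] and (s, a) pairs flattened state-major) is an assumed convention, not something prescribed by the paper.

import numpy as np

def occupancy_measure(T, policy, alpha, gamma):
    """Solve Eqn. (1) for a fixed stochastic policy.
    T:      (nA, nS, nS) with T[a, s, s'] = Pr(s' | s, a)
    policy: (nS, nA) with policy[s, a] = pi(s, a)
    alpha:  (nS,) initial state distribution
    Returns mu of shape (nS, nA), the discounted state-action visits."""
    nS = alpha.shape[0]
    # P[s, s'] = sum_a pi(s, a) T[a, s, s'] : state transitions under pi
    P = np.einsum('sa,ast->st', policy, T)
    # mu = alpha + gamma P^T mu  =>  (I - gamma P^T) mu = alpha
    mu_s = np.linalg.solve(np.eye(nS) - gamma * P.T, alpha)
    return mu_s[:, None] * policy   # mu(s, a) = mu(s) pi(s, a)

def feature_frequencies(F, mu):
    """v_{i,pi} = F(i, .) mu_pi, with F of shape (k, nS * nA), state-major."""
    return F @ mu.reshape(-1)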
3 Apprenticeship Learning
3.1 Overview
The aim of apprenticeship learning is to find a policy π that is at least as good as a policy π^E
demonstrated by an expert, i.e. V(π) ≥ V(π^E). The value functions of π and π^E cannot be
directly compared, unless a reward function is provided. To solve this problem, Ng & Russell
(2000) proposed to first learn a reward function, assuming that the expert is optimal, and then use it
to recover the expert's complete policy. However, the problem of learning a reward function given an
optimal policy is ill-posed (Abbeel & Ng, 2004). In fact, a large class of reward functions, including
all constant functions for instance, may lead to the same optimal policy. To overcome this problem,
Abbeel & Ng (2004) did not consider recovering a reward function; instead, their algorithm returns
a policy π with a bounded loss in the value function, i.e. ‖V(π) − V(π^E)‖ ≤ ε, where the value
is calculated by using the worst-case reward function. This property is derived from the fact that
when the frequencies of the features under two policies match, the cumulative rewards of the two
policies match as well, assuming that the reward is a linear function of these features. In the next two
subsections, we briefly describe two algorithms for apprenticeship learning via IRL. The first one,
known as Maximum Margin Planning (MMP) (Ratliff et al., 2006), is a robust algorithm based on
learning a reward function under which the expert's demonstrated actions are optimal. The second
one, known as Linear Programming Apprenticeship Learning (LPAL) (Syed et al., 2008), is a fast
algorithm that directly returns a policy with a bounded loss in the value.
3.2 Maximum Margin Planning
Maximum Margin Planning (MMP) returns a vector of reward weights w, such that the value of the
expert's policy w^T F μ_{π^E} is higher than the value of an alternative policy w^T F μ_π by a margin that
scales with the number of the expert's actions that are different from the actions of the alternative policy.
This criterion is explicitly specified in the cost function minimized by the algorithm:

    c_q(w) = ( max_{μ∈G} (w^T F + l) μ − w^T F μ_{π^E} )^q + (λ/2) ‖w‖²        (2)

where q ∈ {1, 2} defines the slack penalization, λ is a regularization parameter, and l is a deviation
cost vector that can be defined as: l(s, a) = 1 − π^E(s, a). A policy maximizing the cost-augmented
reward vector (w^T F + l) is almost completely different from π^E, since an additional reward l(s, a)
is given for the actions that are different from those of the expert. This algorithm minimizes the
difference between the value divergence w^T F μ_{π^E} − w^T F μ and the policy divergence l μ.

The cost function c_q is convex, but nondifferentiable. Ratliff et al. (2006) showed that c_q can be
minimized by using a subgradient method. For a given reward w, a subgradient g_w^q is given by:

    g_w^q = q ( (w^T F + l) μ⁺ − w^T F μ_{π^E} )^{q−1} F Δ_w μ_{π^E} + λ w        (3)

where μ⁺ = arg max_{μ∈G} (w^T F + l) μ, and Δ_w μ_{π^E} = μ⁺ − μ_{π^E}.
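In code, one subgradient step on c_q can look as follows. The callable solve_max_occupancy (returning an occupancy in arg max_{μ∈G} for a given state-action reward vector, e.g. value iteration followed by occupancy_measure above) is an assumed helper, not something specified in the paper.

import numpy as np

def mmp_step(w, F, l, mu_E, lam, step, solve_max_occupancy, q=2):
    """One subgradient step on Eqn. (2) using the subgradient (3).
    F: (k, n_sa) feature matrix; l: (n_sa,) deviation costs;
    mu_E: (n_sa,) expert occupancy (exact or estimated)."""
    mu_plus = solve_max_occupancy(F.T @ w + l)   # cost-augmented best response
    margin = (F.T @ w + l) @ mu_plus - (F.T @ w) @ mu_E
    g = q * margin ** (q - 1) * (F @ (mu_plus - mu_E)) + lam * w
    return w - step * g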
3.3 Linear Programming Apprenticeship Learning
Linear Programming Apprenticeship Learning (LPAL) is based on the following observation: if the
reward weights are positive and sum to 1, then V(π) ≥ V(π^E) + min_i [v_{i,π} − v_{i,π^E}], for any policy
π. LPAL consists in finding a policy that maximizes the margin min_i [v_{i,π} − v_{i,π^E}]. The maximal
margin is found by solving the following linear program:

    max_{v, μ_π} v
    subject to
    ∀i ∈ {0, . . . , k−1} :  v ≤ Σ_{s∈S} Σ_{a∈A} μ_π(s, a) f_i(s, a) − Σ_{s∈S} Σ_{a∈A} μ_{π^E}(s, a) f_i(s, a)        (4)
                             (the first sum is v_{i,π}, the second is v_{i,π^E})
    μ_π(s) = α(s) + γ Σ_{s′∈S} Σ_{a∈A} μ_π(s′, a) T(s′, a, s),
    Σ_{a∈A} μ_π(s, a) = μ_π(s),  μ_π(s, a) ≥ 0

The last three constraints in this linear program correspond to the Bellman-flow constraints (Equation (1)) defining G, the feasible set of μ_π. The learned policy π is given by:

    π(s, a) = μ_π(s, a) / Σ_{a′∈A} μ_π(s, a′)
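Because (4) is a plain linear program over (v, μ_π), it can be handed to an off-the-shelf LP solver. The sketch below builds it for scipy.optimize.linprog; the dense constraint matrices and the variable ordering x = [v, μ(s,a), ...] are illustrative choices, and the vector v_E can hold either the demonstrated frequencies v_{i,π^E} of this section or the bounds v_i^E of Section 6.

import numpy as np
from scipy.optimize import linprog

def lpal(F, v_E, T, alpha, gamma):
    """Solve the LPAL program (4). F: (k, nS*nA) state-major; v_E: (k,)."""
    k, n_sa = F.shape
    nA, nS, _ = T.shape
    c = np.zeros(1 + n_sa)
    c[0] = -1.0                                # maximize v <=> minimize -v
    A_ub = np.hstack([np.ones((k, 1)), -F])    # v - F_i mu <= -v_E[i]
    b_ub = -v_E
    A_eq = np.zeros((nS, 1 + n_sa))            # Bellman-flow equalities
    for s in range(nS):
        for sp in range(nS):
            for a in range(nA):
                A_eq[s, 1 + sp * nA + a] = float(sp == s) - gamma * T[a, sp, s]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=alpha,
                  bounds=[(None, None)] + [(0, None)] * n_sa)
    assert res.success, res.message
    mu = res.x[1:].reshape(nS, nA)
    # pi(s, a) = mu(s, a) / sum_a' mu(s, a'); guard against unvisited states
    return mu / np.maximum(mu.sum(axis=1, keepdims=True), 1e-12)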
3.4 Approximating feature frequencies
Notice that both MMP and LPAL require the knowledge of the frequencies v_{i,π^E} ≝ F(i, .) μ_{π^E}.
These frequencies can be analytically calculated (using Bellman-flow constraints) only if π^E is completely specified. Given a sequence of M demonstrated trajectories t_m = (s_1^m, a_1^m, . . . , s_H^m, a_H^m),
the frequencies v_{i,π^E} are estimated as:

    v̂_{i,π^E} = (1/M) Σ_{m=1}^{M} Σ_{t=1}^{H} γ^t f_i(s_t^m, a_t^m)        (5)

There are nevertheless many problems related to this approximation. First, the estimated frequencies
v̂_{i,π^E} can be very different from the true ones when the demonstration trajectories are scarce. Second, the frequencies v̂_{i,π^E} are estimated for a finite horizon H, whereas the frequencies v_{i,π} used in
the objective function (Equations (2) and (4)) are calculated for an infinite horizon (Equation (1)).
In practice, these two values are too different and cannot be compared as done in these cost functions. Finally, the frequencies v_{i,π^E} are a function of both a policy and the transition probabilities;
the empirical estimation of v_{i,π^E} does not take advantage of the known transition probabilities.
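Equation (5) is straightforward to implement. In the sketch below each trajectory is assumed to be a list of (state, action) pairs, and the discounting starts at t = 1 to match the displayed sum.

import numpy as np

def monte_carlo_frequencies(trajectories, f, k, gamma):
    """Empirical feature counts of Eqn. (5).
    trajectories: list of [(s_1, a_1), ..., (s_H, a_H)]
    f: callable f(i, s, a) giving the i-th feature value."""
    v_hat = np.zeros(k)
    for traj in trajectories:
        for t, (s, a) in enumerate(traj, start=1):
            for i in range(k):
                v_hat[i] += gamma ** t * f(i, s, a)
    return v_hat / len(trajectories)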
4 Reward loss in Maximum Margin Planning
To show the effect of the error in the estimated feature frequencies on the quality of the learned
rewards, we present an analysis of the distance between the vector of reward weights ŵ returned by
MMP with the estimated frequencies v̂_{π^E}, calculated from the examples by using Equation (5), and
the vector w^E returned by MMP with the accurate frequencies v_{π^E} = F μ_{π^E}, calculated by using
Equations (1) with the full policy π^E. We adopt the following notations: Δv̂ = v̂_{π^E} − v_{π^E},
Δw = ŵ − w^E, and V_l(w) = max_{μ∈G} (w^T F + l) μ, and we consider q = 1. The following
proposition shows how the reward error Δw is related to the frequency error Δv̂. Due to the fact
that the cost function of MMP is piecewise defined, one cannot find a closed-form relation between Δw and Δv̂. However, we show that for any ŵ ∈ R^k, there is a monotonically decreasing
function f such that for any ε ∈ R⁺, if ‖Δv̂‖₂ < f(ε) then ‖Δw‖₂ ≤ ε.

[Figure 1: Reward loss in MMP with approximate frequencies v̂_{π^E}. We indicate by v_{π^E} (resp. v̂_{π^E}) the linear function defined by the vector v_{π^E} (resp. v̂_{π^E}).]
Proposition 1. Let ε ∈ R⁺. If, for all w ∈ R^k such that ‖w − ŵ‖₂ = ε, the following condition is
verified:

    ‖Δv̂‖₂ < [ V_l(w) − V_l(ŵ) + (ŵ − w)^T v̂_{π^E} + (λ/2)(‖w‖₂ − ‖ŵ‖₂) ] / ε

then ‖Δw‖₂ ≤ ε.
Proof. The condition stated in the proposition implies:

    ‖ŵ − w‖₂ ‖Δv̂‖₂ < V_l(w) − V_l(ŵ) + (ŵ − w)^T v̂_{π^E} + (λ/2)(‖w‖₂ − ‖ŵ‖₂)
    ⇒ (Hölder)   (ŵ − w)^T Δv̂ < V_l(w) − V_l(ŵ) + (ŵ − w)^T v̂_{π^E} + (λ/2)(‖w‖₂ − ‖ŵ‖₂)
    ⇒   V_l(ŵ) − ( ŵ^T v_{π^E} − (λ/2)‖ŵ‖₂ ) < V_l(w) − ( w^T v_{π^E} − (λ/2)‖w‖₂ )

In other terms, the point ŵ^T v_{π^E} − (λ/2)‖ŵ‖₂ is closer to the surface V_l than any other point
w^T v_{π^E} − (λ/2)‖w‖₂, where w is a point on the sphere centered around ŵ with a radius of ε.
Since the function V_l is convex and w^{E T} v_{π^E} − (λ/2)‖w^E‖₂ is by definition the closest point to
the surface V_l, w^E should be inside the ball centered around ŵ with a radius of ε. Therefore,
‖w^E − ŵ‖₂ ≤ ε and thus ‖Δw‖₂ ≤ ε.
Consequently, the reward loss ‖Δw‖₂ approaches zero as the error of the estimated feature frequencies ‖Δv̂‖₂ approaches zero. A simpler bound can be easily derived given admissible
heuristics of V_l.

Corollary: Let V_l^− and V_l^+ be respectively a lower and an upper bound on V_l; then Proposition (1)
holds if V_l(w) − V_l(ŵ) is replaced by V_l^−(w) − V_l^+(ŵ).

Figure (1) illustrates the divergence from the optimal reward weight w^E when approximate frequencies are used. The error is not a continuous function of Δv̂ when the cost function is not
regularized, because the vector returned by MMP is always a fringe point. Informally, the error is
proportional to the maximum subgradient of the function V_l − v_{π^E} at the fringe point w^E.
5 Bootstrapping Maximum Margin Planning
The feature frequency error Δv̂ can be significantly reduced by using the known transition function for calculating v̂_{π^E} and solving the flow Equations (1), instead of the Monte Carlo estimator
(Equation (5)). However, this cannot be done unless the complete expert's policy π^E is provided.
Assuming that the expert's policy π^E is optimal and deterministic, the value w^T F μ_{π^E} in Equation (2) can be replaced by max_{μ∈G_{π^E}} w^T F μ, the value of the optimal policy, according to the
current reward weight w, that selects the same actions as the expert in all the states that occurred in
the demonstration. The cost function of the bootstrapped Maximum Margin Planning becomes:

    c_q(w) = ( max_{μ₁∈G} (w^T F + l) μ₁ − max_{μ₂∈G_{π^E}} w^T F μ₂ )^q + (λ/2) ‖w‖²        (6)

where G_{π^E} is the set of vectors μ_π subject to the following modified Bellman-flow constraints:

    μ_π(s) = α(s) + γ Σ_{s′∈S_e} μ_π(s′) Σ_{a∈A} π^E(s′, a) T^a(s′, s) + γ Σ_{s′∈S\S_e} Σ_{a∈A} μ_π(s′, a) T^a(s′, s)
    Σ_{a∈A} μ_π(s, a) = μ_π(s),  μ_π(s, a) ≥ 0        (7)

S_e is the set of states encountered in the demonstrations, where the expert's policy is known.
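Optimizing over G_{π^E} amounts to planning in a restricted MDP in which every demonstrated state admits a single action, the expert's. The value-iteration sketch below illustrates this; the dictionary encoding of S_e and the fixed iteration count are assumptions made for the example.

import numpy as np

def solve_restricted(T, reward, expert_action, gamma, n_iter=1000):
    """Deterministic policy maximizing 'reward' over G_{pi^E} (Eqn. 7).
    reward:        (nS, nA), e.g. (F.T @ w).reshape(nS, nA)
    expert_action: dict s -> pi^E(s) on the demonstrated states S_e
                   (pass {} to optimize over the unrestricted set G)."""
    nA, nS, _ = T.shape
    V = np.zeros(nS)
    for _ in range(n_iter):
        Q = reward + gamma * np.einsum('ast,t->sa', T, V)
        V = Q.max(axis=1)
        for s, a in expert_action.items():  # clamp demonstrated states
            V[s] = Q[s, a]
    policy = np.zeros((nS, nA))
    policy[np.arange(nS), Q.argmax(axis=1)] = 1.0
    for s, a in expert_action.items():
        policy[s] = 0.0
        policy[s, a] = 1.0
    return policy

Its occupancy, and hence max_{μ∈G_{π^E}} w^T F μ, then follows from occupancy_measure above.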
Unfortunately, the new cost function (Equation (6)) is not necessarily convex. In fact, it corresponds to a margin between two convex functions: the value of the bootstrapped expert's policy
max_{μ∈G_{π^E}} w^T F μ and the value of the best alternative policy max_{μ∈G} (w^T F + l) μ. Yet, a locally
optimal solution of this modified cost function can be found by using the same subgradient as in
Equation (3), replacing μ_{π^E} by arg max_{μ∈G_{π^E}} w^T F μ. In practice, as we will show in the experimental analysis, the solution returned by the bootstrapped MMP outperforms the solution of
MMP where the expert's frequency is calculated without taking into account the known transition
probabilities. This improvement is particularly pronounced in highly stochastic environments. The
computational cost of minimizing this modified cost function is twice that of MMP, since two
optimal policies are found at each iteration.
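Putting the pieces together, one descent iteration on Equation (6) solves the two planning problems and takes a subgradient step. The loop below reuses occupancy_measure and solve_restricted from the earlier sketches; the step size, iteration count, and zero initialization are illustrative.

import numpy as np

def bootstrapped_mmp(F, l, T, alpha, expert_action, gamma,
                     lam=0.01, step=0.1, n_steps=200, q=1):
    k, n_sa = F.shape
    nA, nS, _ = T.shape
    w = np.zeros(k)
    for _ in range(n_steps):
        # best alternative policy under the cost-augmented reward (over G)
        pi_plus = solve_restricted(T, (F.T @ w + l).reshape(nS, nA), {}, gamma)
        mu_plus = occupancy_measure(T, pi_plus, alpha, gamma).reshape(-1)
        # bootstrapped expert: arg max over G_{pi^E}
        pi_boot = solve_restricted(T, (F.T @ w).reshape(nS, nA), expert_action, gamma)
        mu_boot = occupancy_measure(T, pi_boot, alpha, gamma).reshape(-1)
        margin = (F.T @ w + l) @ mu_plus - (F.T @ w) @ mu_boot
        w -= step * (q * margin ** (q - 1) * (F @ (mu_plus - mu_boot)) + lam * w)
    return w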
In the remainder of this section, we provide a theoretical analysis of the cost function given by
Equation (6). For the sake of simplicity, we consider q = 1 and λ = 0.
Proposition 2. The cost function defined by Equation (6) has at most |A|^{|S|} / |A|^{|S_e|} = |A|^{|S|−|S_e|} different local minima.
Proof. If q = 1 and λ = 0, then the cost c_q(w) corresponds to a distance between the convex
and piecewise linear functions max_{μ∈G} (w^T F + l) μ and max_{μ∈G_{π^E}} w^T F μ. Therefore, for any
vector μ₀ ∈ G_{π^E}, the function c_q is monotone in the interval of w where μ₀ is optimal, i.e. where
w^T F μ₀ = max_{μ∈G_{π^E}} w^T F μ. Consequently, the number of local minima of the function c_q is at
most equal to the number of optimal vectors μ in G_{π^E}, which is upper bounded by the number of
deterministic policies defined on S\S_e, i.e. by |A|^{|S|−|S_e|}.
Consequently, the number of different local minima of the function c_q decreases as the number of
states covered by the demonstration increases. Ultimately, the function c_q becomes convex when the
demonstration covers all the possible states.
Theorem 1. If there exists a reward weight vector w* ∈ R^k such that the expert's policy π^E is the
only optimal policy with w*, i.e. arg max_{μ∈G} w*^T F μ = {μ_{π^E}}, then there exists β > 0 such that:
(i), the expert's policy π^E is the only optimal policy with βw*, and (ii), c_q(βw*) is a local minimum
of the function c_q defined in Equation (6).

Proof. The set of subgradients of the function c_q at a point w ∈ R^k, denoted by ∂_w c_q(w), corresponds to vectors F μ′ − F μ″, with μ′ ∈ arg max_{μ∈G} (w^T F + l) μ and μ″ ∈ arg max_{μ∈G_{π^E}} w^T F μ.
In order that c_q(w) be a local minimum, it suffices to ensure that 0 ∈ ∂_w c_q(w), i.e.
∃μ′ ∈ arg max_{μ∈G} (w^T F + l) μ, ∃μ″ ∈ arg max_{μ∈G_{π^E}} w^T F μ such that F μ′ = F μ″. Let w* ∈ R^k
be a reward weight vector such that π^E is the only optimal policy, and let ε = w*^T F μ_{π^E} − w*^T F μ′,
where μ′ ∈ arg max_{μ∈G\{μ_{π^E}}} w*^T F μ. Then, βw*^T F μ_{π^E} − βw*^T F μ′ = 2|S_e|/(1−γ), where
β = 2|S_e| / (ε(1−γ)). Notice that by multiplying w* by β > 0, π^E remains the only optimal policy,
i.e. arg max_{μ∈G} βw*^T F μ = {μ_{π^E}}, and μ′ ∈ arg max_{μ∈G\{μ_{π^E}}} βw*^T F μ. Therefore, it suffices to show that μ_{π^E} ∈ arg max_{μ∈G} (βw*^T F + l) μ. Indeed, max_{μ∈G\{μ_{π^E}}} (βw*^T F + l) μ ≤
max_{μ∈G\{μ_{π^E}}} βw*^T F μ + max_{μ∈G\{μ_{π^E}}} l μ ≤ βw*^T F μ_{π^E} − 2|S_e|/(1−γ) + |S_e|/(1−γ) ≤ βw*^T F μ_{π^E};
therefore, μ_{π^E} ∈ arg max_{μ∈G} (βw*^T F + l) μ.
6 Bootstrapping Linear Programming Apprenticeship Learning
As with MMP, the feature frequencies in LPAL can be analytically calculated only when a complete
policy π^E of the expert is provided. Alternatively, the same error bound V(π) ≥ V(π^E) + v can be
guaranteed by setting v = min_{i=0,...,k−1} min_{π′∈Π_E} [v_{i,π} − v_{i,π′}], where Π_E denotes the set of all the
policies that select the same actions as the expert in all the states that occurred in the demonstration,
assuming π^E is deterministic (in LPAL, π^E is not necessarily an optimal policy). Instead of enumerating all the policies of the set Π_E in the constraints, note that v = min_{i=0,...,k−1} [v_{i,π} − v_i^E], where
v_i^E ≝ max_{π′∈Π_E} v_{i,π′} for each feature i. Therefore, LPAL can be reformulated as maximizing the
margin min_{i=0,...,k−1} [v_{i,π} − v_i^E].

The maximal margin is found by solving the following linear program:

    max_{v, μ_π} v
    subject to
    ∀i ∈ {0, . . . , k−1} :  v ≤ Σ_{s∈S} Σ_{a∈A} μ_π(s, a) f_i(s, a) − v_i^E
                             (the sum is v_{i,π})
    μ_π(s) = α(s) + γ Σ_{s′∈S} Σ_{a∈A} μ_π(s′, a) T(s′, a, s),
    Σ_{a∈A} μ_π(s, a) = μ_π(s),  μ_π(s, a) ≥ 0

where the values v_i^E are found by solving k separate optimization problems (k is the number of
features). For each feature i, v_i^E is the value of the optimal policy in the set Π_E under the reward
weights w defined as: w_i = 1 and w_j = 0, ∀j ≠ i.
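The k auxiliary problems can reuse the restricted planner of Section 5: for each feature i, plan with the indicator reward w = e_i over the policies that agree with the expert on S_e, then read off the resulting frequency. A sketch, with the same assumed helpers as before:

import numpy as np

def expert_feature_bounds(F, T, alpha, expert_action, gamma):
    """v_i^E = max_{pi' in Pi_E} v_{i, pi'}, one planning problem per feature."""
    k, n_sa = F.shape
    nA, nS, _ = T.shape
    v_E = np.zeros(k)
    for i in range(k):
        pi_i = solve_restricted(T, F[i].reshape(nS, nA), expert_action, gamma)
        v_E[i] = F[i] @ occupancy_measure(T, pi_i, alpha, gamma).reshape(-1)
    return v_E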
7 Experimental Results
To validate our approach, we experimented on two simulated navigation problems: a gridworld and
two racetrack domains, taken from (Boularias & Chaib-draa, 2010). While these are not meant to be
challenging tasks, they allow us to compare our approach to other methods of apprenticeship learning, namely MMP and LPAL with Monte Carlo estimation, and a simple classification algorithm
where the action in a given state is selected by performing a majority vote on the k-nearest neighbor
states where the expert's action is known. For each state, the distance k is gradually increased until
at least one known state is encountered. The distance between two states corresponds to the shortest
path between them with a positive probability.
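For completeness, this baseline can be sketched as a growing-radius majority vote. The inputs demo_actions (a map from demonstrated states to expert actions, assumed non-empty) and distance (shortest-path length through positive-probability transitions, as just defined) are assumptions of the sketch.

from collections import Counter

def knn_action(s, demo_actions, distance):
    """Majority vote among the nearest demonstrated states; the search radius
    grows until at least one known state is found."""
    radius = 1
    while True:
        votes = [a for s2, a in demo_actions.items() if distance(s, s2) <= radius]
        if votes:
            return Counter(votes).most_common(1)[0][0]
        radius += 1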
7.1 Gridworld
We consider 16 × 16 and 24 × 24 gridworlds. The state corresponds to the location of the agent on
the grid. The agent has four actions for moving in one of the four directions of the compass. The
actions succeed with probability 0.9. The gridworld is divided into non-overlapping regions, and
the reward varies depending on the region in which the agent is located. For each region i, there is a
feature f_i, where f_i(s) indicates whether state s is in region i. The expert's policy π^E corresponds
to the optimal deterministic policy found by value iteration. In all our experiments on gridworlds,
we used only 10 demonstration trajectories, which is a significantly small number compared to other
methods (Neu & Szepesvári (2007), for example). The duration of the trajectories is 50 time-steps.
Size     Features | Expert   k-NN     MMP + MC  MMP + Bootstrap  LPAL + MC  LPAL + Bootstrap
16 × 16  16       | 0.4672   0.4635   0.0000    0.4678           0.0380     0.1572
16 × 16  64       | 0.5281   0.5198   0.0000    0.5252           0.0255     0.4351
16 × 16  256      | 0.3988   0.4062   0.0537    0.3828           0.0555     0.1706
24 × 24  64       | 0.5210   0.6334   0.0000    0.5217           0.0149     0.2767
24 × 24  144      | 0.5916   0.5876   0.0122    0.5252           0.0400     0.4432
24 × 24  576      | 0.3102   0.2814   0.0974    0.0514           0.0439     0.0349

Table 1: Gridworld average reward results
Table 1 shows the average reward per step of the learned policy, averaged over 10³ independent trials
of the same duration as the demonstration trajectories. Our first observation is that Bootstrapped
MMP learned policies just as good as the expert's policy, while both MMP and LPAL using the Monte
Carlo (MC) estimator remarkably failed to collect any reward. This is due to the fact that we used a
very small number of demonstrations (10 × 50 time-steps) compared to the size of these problems.
Note that this problem is not specific to MMP or LPAL. In fact, any other algorithm using the same
approximation method would produce similar results. The second observation is that the values of
the policies learned by bootstrapped LPAL were between the values of LPAL with Monte Carlo
and the optimal ones. In fact, the policy learned by the bootstrapped LPAL is one that minimizes
the difference between the expected frequency of a feature using this policy and the maximal one
among all the policies that resemble the expert's policy. Therefore, the learned policy maximizes
the frequency of a feature that is not necessarily a good one (one with a high reward weight). We also
notice that the performance of all the tested algorithms was low when 576 features were used. In
this case, every feature takes a non-null weight in one state only. Therefore, the demonstrations did
not provide enough information about the rewards of the states that were not visited by the expert.
Finally, we remark that k-NN performed as well as the expert in this experiment. In fact, since there are no
obstacles on the grid, neighboring states often have similar optimal actions.
7.2 Racetrack
We implemented a simplified car race simulator; a detailed description of the corresponding racetracks is provided in (Boularias & Chaib-draa, 2010). The states correspond to the position of the
car on the racetrack and its velocity. For racetrack (1), the car always starts from the same initial
position, and the duration of each demonstration trajectory is 20 time-steps. For racetrack (2), the
car starts at a random position, and the length of each trajectory is 40 time-steps. A high reward
is given for reaching the finish line, a low cost is associated to each movement, and high cost is
associated to driving off-road (or hitting an obstacle). Figure 2 (a-f) shows the average reward per
step of the learned policies, the average proportion of off-road steps, and the average number of
steps before reaching the finish line, as a function of the number of trajectories in the demonstration. We first notice that k-NN performed poorly; this is principally caused by the effect of driving
off-road on both the cumulated reward and the velocity of the car. In this context, neighbor states
do not necessarily share the same optimal action. Contrary to the gridworld experiments, MMP
with Monte Carlo achieved good performances on racetrack (1). In fact, by fixing the initial state,
the demonstration covers most of the reachable states, and the feature frequencies are accurately
estimated from the demonstration. On racetrack (2) however, MMP with MC was unable to learn a
good policy because all the states were reachable from the initial distribution. Similarly, LPAL with
both MC and bootstrapping failed to achieve good results on racetracks (1) and (2). This is due to
the fact that LPAL tries to maximize the frequency of features that are not necessarily associated with
a high reward, such as hitting obstacles. Finally, we notice the nearly optimal performance of the
bootstrapped MMP, on both racetracks (1) and (2).
[Figure 2: Racetrack results. Each panel plots a quantity against the number of trajectories in the
demonstration, for the methods Expert, MMP + MC, MMP + Bootstrapping, LPAL + MC,
LPAL + Bootstrapping, and k-NN: (a) average reward in racetrack 1; (b) average number of steps in
racetrack 1; (c) average number of off-roads (obstacles hit per step), racetrack 1; (d) average reward
in racetrack 2; (e) average number of steps in racetrack 2; (f) average number of off-roads (obstacles
hit per step), racetrack 2.]

8 Conclusion and Future Work

The main question of apprenticeship learning is how to generalize the expert's policy to states that
have not been encountered during the demonstration. Inverse Reinforcement Learning (IRL) provides an efficient answer which consists in first learning a reward function that explains the observed
behavior, and then using it for the generalization. A strong assumption considered in IRL-based algorithms is that the reward is a linear function of state-action features, and the frequencies of these
features can be estimated from a few demonstrations even if these demonstrations cover only a small
part of the state space. In this paper, we showed that this assumption does not hold in highly stochastic systems. We also showed that this problem can be solved by modifying the cost function so that
the value of the learned policy is compared to the exact value of a generalized expert's policy. We
also provided theoretical insights on the modified cost function, showing that it admits the expert's
true reward as a locally optimal solution, under mild conditions. The empirical analysis confirmed
the outperformance of Bootstrapped MMP in particular. These promising results push us to further
investigate the theoretical properties of the modified cost function.
As future work, we mainly aim to compare this approach with the one proposed by Ratliff et al.
(2007), where the base features are boosted by using a classifier.
References
Abbeel, Pieter and Ng, Andrew Y. Apprenticeship Learning via Inverse Reinforcement Learning. In Proceedings of the Twenty-first International Conference on Machine Learning (ICML'04), pp. 1-8, 2004.
Boularias, Abdeslam and Chaib-draa, Brahim. Apprenticeship Learning via Soft Local Homomorphisms. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA'10), pp. 2971-2976, 2010.
Neu, Gergely and Szepesvari, Csaba. Apprenticeship Learning using Inverse Reinforcement Learning and Gradient Methods. In Conference on Uncertainty in Artificial Intelligence (UAI'07), pp. 295-302, 2007.
Ng, Andrew and Russell, Stuart. Algorithms for Inverse Reinforcement Learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML'00), pp. 663-670, 2000.
Ramachandran, Deepak and Amir, Eyal. Bayesian Inverse Reinforcement Learning. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI'07), pp. 2586-2591, 2007.
Ratliff, N., Bagnell, J., and Zinkevich, M. Maximum Margin Planning. In Proceedings of the Twenty-third International Conference on Machine Learning (ICML'06), pp. 729-736, 2006.
Ratliff, Nathan, Bradley, David, Bagnell, J. Andrew, and Chestnutt, Joel. Boosting Structured Prediction for Imitation Learning. In Advances in Neural Information Processing Systems 19 (NIPS'07), pp. 1153-1160, 2007.
Syed, Umar and Schapire, Robert. A Game-Theoretic Approach to Apprenticeship Learning. In Advances in Neural Information Processing Systems 20 (NIPS'08), pp. 1449-1456, 2008.
Syed, Umar, Bowling, Michael, and Schapire, Robert E. Apprenticeship Learning using Linear Programming. In Proceedings of the Twenty-fifth International Conference on Machine Learning (ICML'08), pp. 1032-1039, 2008.
Probabilistic Deterministic Infinite Automata
David Pfau
Nicholas Bartlett
Frank Wood
Columbia University, New York, NY 10027, USA
{pfau@neurotheory,{bartlett,fwood}@stat}.columbia.edu
Abstract
We propose a novel Bayesian nonparametric approach to learning with probabilistic deterministic finite automata (PDFA). We define and develop a sampler for a
PDFA with an infinite number of states which we call the probabilistic deterministic infinite automata (PDIA). Posterior predictive inference in this model, given
a finite training sequence, can be interpreted as averaging over multiple PDFAs of
varying structure, where each PDFA is biased towards having few states. We suggest that our method for averaging over PDFAs is a novel approach to predictive
distribution smoothing. We test PDIA inference both on PDFA structure learning
and on both natural language and DNA data prediction tasks. The results suggest
that the PDIA presents an attractive compromise between the computational cost
of hidden Markov models and the storage requirements of hierarchically smoothed
Markov models.
1 Introduction
The focus of this paper is a novel Bayesian framework for learning with probabilistic deterministic
finite automata (PDFA) [9]. A PDFA is a generative model for sequential data (PDFAs are reviewed
in Section 2). Intuitively a PDFA is similar to a hidden Markov model (HMM) [10] in that it
consists of a set of states, each of which when visited emits a symbol according to an emission
probability distribution. It differs from an HMM in how state-to-state transitions occur; transitions
are deterministic in a PDFA and nondeterministic in an HMM.
In our framework for learning with PDFAs we specify a prior over the parameters of a single large
PDFA that encourages state reuse. The inductive bias introduced by the PDFA prior provides a soft
constraint on the number of states used to generate the data. We take the limit as the number of states
becomes infinite, yielding a model we call the probabilistic deterministic infinite automata (PDIA).
Given a finite training sequence, the PDIA posterior distribution is an infinite mixture of PDFAs.
Samples from this distribution form a finite sample approximation to this infinite mixture, and can
be drawn via Markov chain Monte Carlo (MCMC) [6]. Using such a mixture we can average over
our uncertainty about the model parameters (including state cardinality) in a Bayesian way during
prediction and other inference tasks. We find that averaging over a finite number of PDFAs trained
on naturalistic data leads to better predictive performance than using a single "best" PDFA.
We chose to investigate learning with PDFAs because they are intermediate in expressive power between HMMs and finite-order Markov models, and thus strike a good balance between generalization performance and computational efficiency. A single PDFA is known to have relatively limited
expressivity. We argue that a finite mixture of PDFAs has greater expressivity than that of a single
PDFA but is not as expressive as a probabilistic nondeterministic finite automaton (PNFA).¹ A PDIA
is clearly highly expressive; an infinite mixture over the same is even more so. Even though ours is
a Bayesian approach to PDIA learning, in practice we only ever deal with a finite approximation to
the full posterior and thus limit our discussion to finite mixtures of PDFAs.
¹ PNFAs with no final probability are equivalent to hidden Markov models [3].
While model expressivity is a concern, computational considerations often dominate model choice.
We show that prediction in a trained mixture of PDFAs can have lower asymptotic cost than forward
prediction in the PNFA/HMM class of models. We also present evidence that averaging over PDFAs
gives predictive performance superior to HMMs trained with standard methods on naturalistic data.
We find that PDIA predictive performance is competitive with that of fixed-order, smoothed Markov
models with the same number of states. While sequence learning approaches such as the HMM
and smoothed Markov models are well known and now highly optimized, our PDIA approach to
learning is novel and is amenable to future improvement.
Section 2 reviews PDFAs, Section 3 introduces Bayesian PDFA inference, Section 4 presents experimental results on DNA and natural language, and Section 5 discusses related work on PDFA
induction and the theoretical expressive power of mixtures of PDFAs. In Section 6 we discuss ways
in which PDIA predictive performance might be improved in future research.
2 Probabilistic Deterministic Finite Automata
A PDFA is formally defined as a 5-tuple M = (Q, Σ, δ, π, q0), where Q is a finite set of states, Σ is a finite alphabet of observable symbols, δ : Q × Σ → Q is the transition function from a state/symbol pair to the next state, π : Q × Σ → [0, 1] is the probability of the next symbol given a state, and q0 is the initial state.² Throughout this paper we will use i to index elements of Q, j to index elements of Σ, and t to index elements of an observed string. For example, δij is shorthand for δ(qi, σj), where qi ∈ Q and σj ∈ Σ.
² In general q0 may be replaced by a distribution over initial states.
Given a state qi, the probability that the next symbol takes the value σj is given by π(qi, σj). We use the shorthand πqi for the state-specific discrete distribution over symbols for state qi. We can also write σ|qi ∼ πqi, where σ is a random variable that takes values in Σ. Given a state qi and a symbol σj, however, the next state qi′ is deterministic: qi′ = δ(qi, σj). Generating from a PDFA involves first generating a symbol stochastically given the state the process is in: xt|ξt ∼ πξt, where ξt ∈ Q is the state at time t. Next, given ξt and xt, the machine transitions deterministically to the next state: ξt+1 = δ(ξt, xt). This is the reason for the confusing "probabilistic deterministic" name for these models. Turning this around, given data, q0, and δ, there is no uncertainty about the path through the states. This is a primary source of computational savings relative to HMMs.
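To make this generative process concrete, the following is a minimal Python sketch of sampling a string from a fixed PDFA; the dictionary representation and all names here are our own illustration, not notation from the paper.

    import random

    # Toy PDFA over alphabet {'a', 'b'} with states {0, 1}.
    # delta[(state, symbol)] is the unique next state; pi[state] is the
    # emission distribution over symbols for that state.
    delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
    pi = {0: {'a': 0.9, 'b': 0.1}, 1: {'a': 0.2, 'b': 0.8}}

    def generate(delta, pi, q0=0, T=20):
        """Emit T symbols: sample x_t from pi[state], then follow the
        deterministic transition state <- delta[(state, x_t)]."""
        state, out = q0, []
        for _ in range(T):
            symbols = list(pi[state])
            weights = [pi[state][s] for s in symbols]
            x = random.choices(symbols, weights=weights)[0]
            out.append(x)
            state = delta[(state, x)]  # no uncertainty about the state path
        return ''.join(out)

    print(generate(delta, pi))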
PDFAs are more general than nth-order Markov models (i.e. m-gram models, m = n + 1), but less
expressive than hidden Markov models (HMMs)[3]. For the case of nth-order Markov models, we
can construct a PDFA with one state per suffix x1 x2 . . . xn . Given a state and a symbol xn+1 , the
unique next state is the one corresponding to the suffix x2 . . . xn+1 . Thus nth-order Markov models
are a subclass of PDFAs with O(|Σ|^n) states. For an HMM, given data and an initial distribution
over states, there is a posterior probability for every path through the state space. PDFAs are those
HMMs for which, given a unique start state, the posterior probability over paths is degenerate at a
single path. As we explain in Section 5, mixtures of PDFAs are strictly more expressive than single
PDFAs, but still less expressive than PNFAs.
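To illustrate the reduction of an nth-order Markov model to a PDFA described above, here is a short sketch (ours, not from the paper) that builds the suffix-state transition function:

    from itertools import product

    def markov_as_pdfa(alphabet, n):
        """One PDFA state per length-n suffix; on symbol x the suffix
        (s1, ..., sn) transitions to (s2, ..., sn, x)."""
        states = list(product(alphabet, repeat=n))
        delta = {(s, x): s[1:] + (x,) for s in states for x in alphabet}
        return states, delta

    states, delta = markov_as_pdfa(('a', 'b'), n=2)
    print(len(states))                  # |alphabet|**n = 4 states
    print(delta[(('a', 'b'), 'a')])     # ('b', 'a')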
3 Bayesian PDFA Inference
We start our description of Bayesian PDFA inference by defining a prior distribution over the parameters of a finite PDFA. We then show how to analytically marginalize nuisance parameters out
of the model and derive a Metropolis-Hastings sampler for posterior inference using the resulting
collapsed representation. We discuss the limit of our model as the number of states in the PDFA goes
to infinity. We call this limit the probabilistic deterministic infinite automaton (PDIA). We develop
a PDIA sampler that carries over from the finite case in a natural way.
3.1 A PDFA Prior
We assume that the set of states Q, the set of symbols Σ, and the initial state q0 of a PDFA are known, but that the transition and emission functions are unknown. The PDFA prior then consists of a prior over both the transition function δ and the emission probability function π. In the finite case δ and π are representable as finite matrices, with one column per element of Σ and one row per element
of Q. For each column j (j co-indexes columns and set elements) of the transition matrix δ, our prior stipulates that the elements of that column are i.i.d. draws from a discrete distribution φj = [φ1j, . . . , φ|Q|j] over Q, that is, δij ∼ φj, 0 ≤ i ≤ |Q| − 1. The φj represent transition tendencies given a symbol: if the ith element of φj is large then state qi is likely to be transitioned to anytime the last symbol was σj. The φj's are themselves given a shared Dirichlet prior with parameters αφ, where α is a concentration and φ is a template transition probability vector. If the ith element of φ is large then the ith state is likely to be transitioned to regardless of the emitted symbol. We place a uniform Dirichlet prior on φ itself, with β total mass, and average over φ during inference. This hierarchical Dirichlet construction encourages both general and context-specific state reuse. We also place a uniform Dirichlet prior over the per-state emission probabilities πqi with γ total mass, which smooths emission distribution estimates. Formally:
φ | β, |Q| ∼ Dir(β/|Q|, . . . , β/|Q|)        φj | α, φ ∼ Dir(αφ)        (1)
πqi | γ, |Σ| ∼ Dir(γ/|Σ|, . . . , γ/|Σ|)        δij ∼ φj        (2)
where 0 ≤ i ≤ |Q| − 1 and 1 ≤ j ≤ |Σ|. Given a sample from this model we can run the PDFA to generate a sequence of T symbols. Using ξt to denote the state of the PDFA at position t in the sequence:
ξ0 = q0,    x0 ∼ πq0,    ξt = δ(ξt−1, xt−1),    xt ∼ πξt
We chose this particular inductive bias, with transitions tied together within a column of δ, because we wanted the most recent symbol emission to be informative about what the next state is. If we instead had a single Dirichlet prior over all elements of δ, transitions to a few states would be highly likely no matter the context, and those states would dominate the behavior of the automaton. If we tied together rows of δ instead of columns, being in a particular state would tell us more about the sequence of states we came from than about the symbols that got us there.
Note that this prior stipulates a fully connected PDFA in which all states may transition to all others
and all symbols may be emitted from each state. This is slightly different from the canonical finite-state machine literature, where sparse connectivity is usually the norm.
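The following NumPy sketch (ours; names are illustrative) draws a finite PDFA from this hierarchical prior:

    import numpy as np

    def sample_pdfa(n_states, n_symbols, alpha=1.0, beta=1.0, gamma=1.0, rng=None):
        """Draw (delta, pi) from the finite PDFA prior of Eqs. (1)-(2):
        phi ~ Dir(beta/|Q|, ...), phi_j ~ Dir(alpha * phi),
        delta_ij ~ phi_j, and pi_qi ~ Dir(gamma/|Sigma|, ...)."""
        rng = rng or np.random.default_rng()
        phi = rng.dirichlet(np.full(n_states, beta / n_states))   # template vector
        phi_j = [rng.dirichlet(alpha * phi) for _ in range(n_symbols)]
        delta = np.array([[rng.choice(n_states, p=phi_j[j])
                           for j in range(n_symbols)]
                          for _ in range(n_states)])
        pi = rng.dirichlet(np.full(n_symbols, gamma / n_symbols), size=n_states)
        return delta, pi

    delta, pi = sample_pdfa(n_states=5, n_symbols=3)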
3.2 PDFA Inference
Given observational data, we are interested in learning a posterior distribution over PDFAs. We do this by Gibbs sampling the transition matrix δ with π and the φj integrated out. To start inference we need the likelihood function for a fixed PDFA; it is given by
p(x0:T | δ, π) = π(ξ0, x0) ∏_{t=1}^{T} π(ξt, xt).
Remember that ξt | ξt−1, xt−1 is deterministic given the transition function δ. We can marginalize π out of this expression and express the likelihood of the data in a form that depends only on the counts of symbols emitted from each state. Define the count matrix c for the sequence x0:T and transition matrix δ as cij = ∑_{t=0}^{T} Iij(ξt, xt), where Iij(ξt, xt) is an indicator function for the automaton being in state qi when it generates xt, i.e. ξt = qi and xt = σj. This matrix c = [cij] gives the number of times each symbol is emitted from each state. Due to multinomial-Dirichlet conjugacy we can express the probability of a sequence given the transition function δ, the count matrix c and γ:
p(x0:T | δ, c, γ) = ∫ p(x0:T | δ, π) p(π | γ) dπ = ∏_{i=0}^{|Q|−1} [ Γ(γ) ∏_{j=1}^{|Σ|} Γ(γ/|Σ| + cij) ] / [ Γ(γ/|Σ|)^{|Σ|} Γ(γ + ∑_{j=1}^{|Σ|} cij) ]    (3)
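As a sanity check on (3), this small sketch (ours; it assumes SciPy is available) evaluates the log of the collapsed likelihood directly from a count matrix:

    import numpy as np
    from scipy.special import gammaln

    def log_marginal_likelihood(c, gamma):
        """log p(x | delta, c, gamma) per Eq. (3): a product over states of
        Dirichlet-multinomial marginals of the per-state emission counts."""
        n_states, n_symbols = c.shape
        a = gamma / n_symbols
        per_state = (gammaln(gamma) - n_symbols * gammaln(a)
                     + gammaln(c + a).sum(axis=1)
                     - gammaln(gamma + c.sum(axis=1)))
        return per_state.sum()

    c = np.array([[3, 1], [0, 5]])  # emission counts: 2 states x 2 symbols
    print(log_marginal_likelihood(c, gamma=1.0))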
If the transition matrix δ is observed we have a closed-form expression for its likelihood given φ with all the φj marginalized out. Let vij be the number of times state qi is transitioned to given that σj was the last symbol emitted, i.e. vij is the number of times δi′j = qi over all states i′ in column j. The marginal likelihood of δ in terms of φ is then:
p(δ | α, φ) = ∫ p(δ | φ1:|Σ|) p(φ1:|Σ| | α, φ) dφ1:|Σ| = ∏_{j=1}^{|Σ|} [ Γ(α) ∏_{i=0}^{|Q|−1} Γ(αφi + vij) ] / [ Γ(α + |Q|) ∏_{i=0}^{|Q|−1} Γ(αφi) ]    (4)
We perform posterior inference in the finite model by sampling elements of δ and the vector φ. One can sample δij given the rest of the matrix δ−ij using
p(δij | δ−ij, x0:T, α, φ) ∝ p(x0:T | δij, δ−ij) p(δij | δ−ij, α, φ)    (5)
Both terms on the right hand side of this equation have closed-form expressions, the first given in (3). The second can be found from (4) and is
P(δij = qi′ | δ−ij, α, φ) = (αφi′ + vi′j) / (α + |Q| − 1)    (6)
where vi′j is the number of elements in column j equal to qi′, excluding δij. As |Q| is finite, we compute (5) for all values of δij and normalize to produce the required conditional probability distribution.
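Schematically, the finite-case Gibbs update can be written as below (our sketch; counts_for and log_marginal_likelihood are hypothetical helpers, the latter implementing (3), and v[:, j] holds the column-j usage counts excluding the entry being resampled):

    import numpy as np

    def gibbs_sample_entry(delta, i, j, x, alpha, phi, v, gamma, rng):
        """Resample delta[i, j] from Eq. (5): data likelihood (3) times
        prior term (6), normalized over all candidate next states."""
        n_states = len(phi)
        log_post = np.empty(n_states)
        for q in range(n_states):
            delta[i, j] = q
            c = counts_for(delta, x)   # replay x through the machine
            log_post[q] = (log_marginal_likelihood(c, gamma)
                           + np.log(alpha * phi[q] + v[q, j]))
        p = np.exp(log_post - log_post.max())
        delta[i, j] = rng.choice(n_states, p=p / p.sum())
        return delta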
Note that in (3), the count matrix c may be profoundly impacted by changing even a single element of δ. The values in c depend on the specific sequence of states the automaton used to generate x. Changing the value of a single element of δ affects the state trajectory the PDFA must follow to generate x0:T. Among other things this means that some elements of c that were nonzero may become zero, and vice versa.
We can reduce the computational cost of inference by deleting transitions δij for which the corresponding counts cij become 0. In practical sampler implementations this means that one need not even represent transitions corresponding to zero counts. The likelihood of the data (3) does not depend on the value of δij if symbol σj is never emitted while the machine is in state qi. In this case sampling from (5) is the same as sampling without conditioning on the data at all. Thus, if while sampling we change some transition such that cij = 0 for some pair (i, j), we can delete δij until another transition is changed such that cij becomes nonzero again, at which point we sample δij anew. Under the marginal joint distribution of a column of δ the row entries in that column are exchangeable, and so deleting an entry of δ has the same effect as marginalizing it out. When all δij for some state qi are marginalized out, we can say the state itself is marginalized out. When we delete an element from a column of δ, we replace the |Q| − 1 in the denominator of (6) with Dj⁺ = ∑_{i=0}^{|Q|−1} I(vij ≠ 0), the number of entries in the jth column of δ that are not marginalized out, yielding
P(δij = qi′ | δ−ij, α, φ) = (αφi′ + vi′j) / (α + Dj⁺).    (7)
If, when sampling δij, it is assigned a state qi′ such that some ci′j′ which was zero is now nonzero, we simply reinstantiate δi′j′ by drawing from (7) and update Dj′⁺. When sampling a single δij there can be many such transitions, as the path through the machine dictated by x0:T may use many transitions in δ that were deleted. In this case we update incrementally, increasing Dj⁺ and vij as we go.
While it is possible to construct a Gibbs sampler using (5) in this collapsed representation, such a sampler requires a Monte Carlo integration over a potentially large subset of the marginalized-out transitions in δ, which may be costly. A simpler strategy is to pretend that all entries of δ exist but are sampled in a "just-in-time" manner. This gives rise to a Metropolis-Hastings (MH) sampler for δ where the proposed value for δij is either one of the instantiated states or any one of the equivalent marginalized-out states. Any time a marginalized-out element of δ is required we can pretend as if we had just sampled its value; because its value had no effect on the likelihood of the data, we know that it would have been sampled directly from (7). It is in this sense that all marginalized-out states are equivalent: we know nothing more about their connectivity structure than that given by the prior in (7).
For the MH sampler, denote the set of non-marginalized-out δ entries δ⁺ = {δij : cij > 0}. We propose a new value qi* for one δij ∈ δ⁺ according to (7). The conditional posterior probability
                 PDIA  PDIA-MAP  HMM-EM  bigram  trigram  4-gram  5-gram  6-gram       SM
AIW  perplexity  5.13      5.46    7.89    9.71     6.45    5.13    4.80    4.69     4.78
     states     365.6       379      52      28      382   2,023   5,592  10,838   19,358
DNA  perplexity  3.72      3.72    3.76    3.77     3.75    3.74    3.73    3.72     3.56
     states      64.7        54      19       5       21      85     341   1,365  314,166
Table 1: PDIA inference performance relative to HMM and fixed-order Markov models. Top rows: perplexity. Bottom rows: number of states in each model. For the PDIA this is an average number.
of this proposal is proportional to p(x0:T | δij = qi*, δ⁺−ij) P(δij = qi* | δ⁺−ij). The Hastings correction exactly cancels the proposal probability in the accept/reject ratio, leaving an MH acceptance probability, for δij being set to qi* given that its previous value was qi′, of
min{ 1, p(x0:T | δij = qi*, δ⁺−ij) / p(x0:T | δij = qi′, δ⁺−ij) }.    (8)
Whether qi* is marginalized out or not, evaluating p(x0:T | δij = qi*, δ⁺−ij) may require reinstantiating marginalized-out elements of δ. As before, these values are sampled from (7) on a just-in-time schedule. If the new value is accepted, all δij ∈ δ⁺ for which cij = 0 are removed, and then we move to the next transition in δ to sample.
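One such MH update might look as follows (our sketch; propose_from_prior stands in for a draw from (7), and counts_for and log_marginal_likelihood are the hypothetical helpers from above):

    import numpy as np

    def mh_step(delta, i, j, x, gamma, rng):
        """One Metropolis-Hastings update of delta[i, j] per Eq. (8): the
        prior proposal cancels, so acceptance uses the likelihood ratio."""
        old = delta[i, j]
        ll_old = log_marginal_likelihood(counts_for(delta, x), gamma)
        delta[i, j] = propose_from_prior(j)   # candidate q* from Eq. (7)
        ll_new = log_marginal_likelihood(counts_for(delta, x), gamma)
        if np.log(rng.random()) >= ll_new - ll_old:
            delta[i, j] = old                 # reject: restore previous value
        return delta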
In the finite case, one can sample φ by Metropolis-Hastings or use a MAP estimate as in [7]. Hyperparameters α, β and γ can be sampled via Metropolis-Hastings updates. In our experiments we use Gamma(1,1) hyperpriors.
3.3 The Probabilistic Deterministic Infinite Automaton
We would like to avoid placing a strict upper bound on the number of states so that model complexity can grow with the amount of training data. To see how to do this, consider what happens when |Q| → ∞. In this case, the right hand side of equations (1) and (2) must be replaced by infinite dimensional alternatives
φ ∼ PY(β, d0, H)    φj ∼ PY(α, d, φ)    δij ∼ φj
where PY stands for Pitman-Yor process and H in our case is a geometric distribution over the integers. The resulting hierarchical model becomes the hierarchical Pitman-Yor process (HPYP) over a discrete alphabet [14]. The discount parameters d0 and d are particular to the infinite case, and when both are zero the HPYP becomes the well known hierarchical Dirichlet process (HDP), which is the infinite dimensional limit of (1) and (2) [15]. Given a finite amount of data, there can only be nonzero counts for a finite number of state/symbol pairs, so our marginalization procedure from the finite case will yield a δ with at most T elements. Denote these non-marginalized-out entries by δ⁺. We can sample the elements of δ⁺ as before using (8) provided that we can propose from the HPYP. In many HPYP sampler representations this is easy to do. We use the Chinese restaurant franchise representation [15], in which the posterior predictive distribution of δij given δ⁺−ij can be expressed with φj and φ integrated out as
"
#
0 ? ?i0 d0
0 j ? k i0 j d
?
+
k
d
w
?
+
?
d
v
?j
i
?
0
i
+
+
+
H(qi0 )
(9)
P (?ij = qi0 |??ij
, ?, ?) = E
? + w?
? + w?
? + Dj+
? + Dj+
P
P
P
where wi0 , ki0 j , ?i0 , w? = i wi , k?j = i kij , and ?? = i ?i are stochastic bookkeeping counts
required by the Chinese Restaurant franchise sampler. These counts must themselves be sampled
[15]. The discount hyperparameters can also be sampled by Metropolis-Hastings.
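For intuition, here is a sketch (ours; all names illustrative) of the quantity inside the expectation in (9), evaluated for one fixed configuration of the bookkeeping counts:

    def py_predictive(q, j, v, k, w, s, alpha, beta, d, d0, p_H):
        """Two-level Pitman-Yor (Chinese restaurant franchise) predictive
        for delta_ij = q. v[i][j]: column-j entries equal to state i;
        k[i][j]: tables for state i in restaurant j; w[i], s[i]: top-level
        customers and tables; p_H: the geometric base measure."""
        Dj = sum(1 for i in v if v[i][j] > 0)        # D_j^+ as in the text
        kj = sum(k[i][j] for i in k)
        w_tot, s_tot = sum(w.values()), sum(s.values())
        top = (w[q] - d0 * s[q] + (beta + d0 * s_tot) * p_H(q)) / (beta + w_tot)
        return (v[q][j] - d * k[q][j] + (alpha + d * kj) * top) / (alpha + Dj)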
4 Experiments and Results
To test our PDIA inference approach we evaluated it on discrete natural sequence prediction and
compared its performance to HMMs and smoothed n-gram models. We trained the models on two
[Figure 1: Subsampled PDIA sampler trace for Alice in Wonderland. The top trace is the joint log likelihood of the model and training data, the bottom trace is the number of states.]
datasets: a character sequence from Alice in Wonderland [2] and a short sequence of mouse DNA. The Alice in Wonderland (AIW) dataset was preprocessed to remove all characters but letters and spaces, shift all letters from upper to lower case, and split along sentence dividers to yield a 27-character alphabet (a-z and space). We trained on 100 random sentences (9,986 characters) and tested on 50 random sentences (3,891 characters). The mouse DNA dataset consisted of a fragment of chromosome 2 with 194,173 base pairs, which we treated as a single unbroken string. We used the first 150,000 base pairs for training and the rest for testing. For AIW, the state of the PDIA model was always set to q0 at the start of each sentence. For DNA, the state of the PDIA model at the start of the test data was set to the last state of the model after accepting the training data. We placed Gamma(1,1) priors over α, β and γ, set the parameter of the geometric base distribution H to .001, and used uniform priors for d0 and d.
We evaluated the performance of the learned models by calculating the average per-character predictive perplexity of the test data. For training data x1:T and test data y1:T′ this is given by 2^(−(1/T′) log2 P(y1:T′ | x1:T)). It is a measure of the average uncertainty the model has about what character comes next given the sequence up to that point, and is at most |Σ|. We evaluated the probability of the test data incrementally, integrating the test data into the model in the standard Bayesian way.
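Concretely, the reported perplexity can be computed from the per-character predictive probabilities as in this sketch (ours):

    import math

    def perplexity(pred_probs):
        """Average per-character predictive perplexity,
        2 ** (-(1/T') * sum_t log2 p(y_t | y_<t, x_1:T))."""
        T = len(pred_probs)
        return 2 ** (-sum(math.log2(p) for p in pred_probs) / T)

    print(perplexity([0.5, 0.25, 0.5, 0.125]))  # approximately 3.36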
Test perplexity results are shown in Table 1 on the first line of each subtable. Each sample passed through every instantiated transition. Every fifth sample for AIW and every tenth sample for DNA after burn-in was used for prediction. For AIW, we ran 15,000 burn-in samples and used 3,500 samples for predictive inference. Subsampled sampler diagnostic plots that demonstrate the convergence properties of our sampler are shown in Figure 1. When modeling the DNA dataset we burned in for 1,000 samples and used 900 samples for inference. For the smoothed n-gram models, we report thousand-sample average perplexity results for hierarchical Pitman-Yor process (HPYP) [14] models of varying Markov order (1 through 5, notated as bigram through 6-gram) after burning each model in for one hundred samples. We also show the performance of the single-particle incremental variant of the sequence memoizer (SM) [5], the SM being the limit of an n-gram model as n → ∞. We also show results for a hidden Markov model (HMM) [8] trained using expectation-maximization (EM). We determined the best number of hidden states by cross-validation on the test data (a procedure used here to produce optimistic HMM performance for comparison purposes only).
The performance of the PDIA exceeds that of the HMM and is approximately equal to that of
a smoothed 4-gram model, though it does not outperform very deep, smoothed Markov models.
This is in contrast to [16], which found that PDFAs trained on natural language data were able to
predict as well as unsmoothed trigrams, but were significantly worse than smoothed trigrams, even
when averaging over multiple learned PDFAs. As can be seen in the second line of each subtable
in Table 1, the MAP number of states learned by the PDIA is significantly lower than that of the
n-gram model with equal predictive performance.
Unlike the HMM, the computational complexity of PDFA prediction does not depend on the number
of states in the model because only a single path through the states is followed. This means that the
asymptotic cost of prediction for the PDIA is O(LT′), where L is the number of posterior samples and T′ is the length of the test sequence. For any single HMM it is O(KT′), where K is the number
of states in the HMM. This is because all possible paths must be followed to achieve the given HMM
predictive performance (although a subset of possible paths could be followed if doing approximate
[Figure 2: Two PNFAs outside the class of PDFAs. (a) can be represented by a mixture of two PDFAs, one following the right branch from state 0, the other following the left branch. (b), in contrast, cannot be represented by any finite mixture of PDFAs.]
inference). In PDIA inference we too can choose the number of samples used for prediction, but
here even a single sample has empirical prediction performance superior to averaging over all paths
in an HMM. The computational complexity of smoothed n-gram inference is equivalent to that of PDIA inference; however, the storage cost for the large n-gram models is significantly higher than that of the estimated PDIA for the same predictive performance.
5 Theory and Related Work
The PDIA posterior distribution takes the form of an infinite mixture of PDFAs. In practice, we
run a sampler for some number of iterations and approximate the posterior with a finite mixture
of PDFAs. For this reason, we now consider the expressive power of finite mixtures of PDFAs.
We show that they are strictly more expressive than PDFAs, but strictly less expressive than hidden
Markov models. Probabilistic non-deterministic finite automata (PNFA) are a strictly larger model
class than PDFAs. For example, the PNFA in Figure 2(a) cannot be expressed as a PDFA [3]. However, it can be expressed as a mixture of two PDFAs, one with Q = {q0, q1, q3} and the other with Q = {q0, q2, q3}. Thus mixtures of PDFAs are a strictly larger model class than PDFAs. In general, any PNFA where the nondeterministic transitions can only be visited once can be expressed as a mixture of PDFAs. However, if we replace transitions to q3 with transitions to q0, as in Figure 2(b), there
is no longer any equivalent finite mixture of PDFAs, since the nondeterministic branch from q0 can
be visited an arbitrary number of times.
Previous work on PDFA induction has focused on accurately discovering model structure when the
true generative mechanism is a PDFA. State merging algorithms do this by starting with the trivial
PDFA that only accepts the training data and merging states that pass a similarity test [1, 17], and
have been proven to identify the correct model in the limit of infinite data. State splitting algorithms
start at the opposite extreme, with the trivial single-state PDFA, and split states that pass a difference
test [12, 13]. These algorithms return only a deterministic estimate, while ours naturally expresses
uncertainty about the learned model.
To test if we can learn the generative mechanism given our inductive bias, we trained the PDIA on
data from three synthetic grammars: the even process [13], the Reber grammar [11] and the Feldman
grammar [4], which have up to 7 states and 7 symbols in the alphabet. In each case the mean number
of states discovered by the model approached the correct number as more data was used in training.
Results are presented in Figure 3. Furthermore, the predictive performance of the PDIA was nearly
equivalent to the actual data generating mechanism.
6 Discussion
Our Bayesian approach to PDIA inference can be interpreted as a stochastic search procedure for
PDFA structure learning where the number of states is unknown. In Section 5 we presented evidence
that PDFA samples from our PDIA inference algorithm have the same characteristics as the true
generative process. This in and of itself may be of interest to the PDFA induction community.
[Figure 3: Three synthetic PDFAs: (a) even process [13], (b) Reber grammar [11], (c) Feldman grammar [4]; (d) posterior mean and standard deviation of the number of states discovered during PDIA inference for varying amounts of data generated by each of the synthetic PDFAs. PDIA inference discovers PDFAs with the correct number of states.]
We ourselves are more interested in establishing new ways to produce smoothed predictive conditional distributions. Inference in the PDIA presents a completely new approach to smoothing,
smoothing by averaging over PDFA model structure rather than hierarchically smoothing related
emission distribution estimates. Our PDIA approach gives us an attractive ability to trade-off between model simplicity in terms of number of states, computational complexity in terms of asymptotic cost of prediction, and predictive perplexity. While our PDIA approach may not yet outperform
the best smoothing Markov model approaches in terms of predictive perplexity alone, it does outperform them in terms of model complexity required to achieve the same predictive perplexity, and
outperforms HMMs in terms of asymptotic time complexity of prediction. This suggests that a
future combination of smoothing over model structure and smoothing over emission distributions
could produce excellent results. PDIA inference gives researchers another tool to choose from when
building models. If very fast prediction is desirable and the predictive perplexity difference between
the PDIA and, for instance, the most competitive n-gram is insignificant from an application perspective, then doing finite sample inference in the PDIA offers a significant computational advantage
in terms of memory.
We indeed believe the most promising approach to improving PDIA predictive performance is to
construct a smoothing hierarchy over the state specific emission distributions, as is done in the
smoothing n-gram models. For an n-gram, where every state corresponds to a suffix of the sequence,
the predictive distributions for a suffix is smoothed by the predictive distribution for a shorter suffix,
for which there are more observations. This makes it possible to increase the size of the model indefinitely without generalization performance suffering [18]. In the PDIA, by contrast, the predictive
probabilities for states are not tied together. Since states of the PDIA are not uniquely identified
by suffixes, it is no longer clear what the natural smoothing hierarchy is. It is somewhat surprising
that PDIA learning works nearly as well as n-gram modeling even without a smoothing hierarchy
for its emission distributions. Imposing a hierarchical smoothing of the PDIA emission distributions
remains an open problem.
References
[1] R. Carrasco and J. Oncina. Learning stochastic regular grammars by means of a state merging method. Grammatical Inference and Applications, pages 139-152, 1994.
[2] L. Carroll. Alice's Adventures in Wonderland. Macmillan, 1865. URL http://www.gutenberg.org/etext/11.
[3] P. Dupont, F. Denis, and Y. Esposito. Links between probabilistic automata and hidden Markov models: probability distributions, learning models and induction algorithms. Pattern Recognition, 38(9):1349-1371, 2005.
[4] J. Feldman and J. F. Hanna. The structure of responses to a sequence of binary events. Journal of Mathematical Psychology, 3(2):371-387, 1966.
[5] J. Gasthaus, F. Wood, and Y. W. Teh. Lossless compression based on the Sequence Memoizer. In Data Compression Conference 2010, pages 337-345, 2010.
[6] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall, New York, 1995.
[7] D. J. C. MacKay and L. C. Bauman Peto. A hierarchical Dirichlet language model. Natural Language Engineering, 1(2):289-307, 1995.
[8] K. Murphy. Hidden Markov model (HMM) toolbox for Matlab, 2005. URL http://www.cs.ubc.ca/~murphyk/Software/HMM/hmm.html.
[9] M. O. Rabin. Probabilistic automata. Information and Control, 6(3):230-245, 1963.
[10] L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77:257-286, 1989.
[11] A. S. Reber. Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior, 6(6):855-863, 1967.
[12] D. Ron, Y. Singer, and N. Tishby. The power of amnesia: Learning probabilistic automata with variable memory length. Machine Learning, 25(2):117-149, 1996.
[13] C. R. Shalizi and K. L. Shalizi. Blind construction of optimal nonlinear recursive predictors for discrete sequences. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 504-511. UAI Press, 2004.
[14] Y. W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the Association for Computational Linguistics, pages 985-992, 2006.
[15] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[16] F. Thollard. Improving probabilistic grammatical inference core algorithms with post-processing techniques. In Eighteenth International Conference on Machine Learning, pages 561-568, 2001.
[17] F. Thollard, P. Dupont, and C. de la Higuera. Probabilistic DFA inference using Kullback-Leibler divergence and minimality. In Seventeenth International Conference on Machine Learning, pages 975-982, 2000.
[18] F. Wood, C. Archambeau, J. Gasthaus, L. James, and Y. W. Teh. A stochastic memoizer for sequence data. In Proceedings of the 26th International Conference on Machine Learning, pages 1129-1136, Montreal, Canada, 2009.
Decontaminating Human Judgments
by Removing Sequential Dependencies
Michael C. Mozer,¹ Harold Pashler,² Matthew Wilder,¹
Robert V. Lindsey,¹ Matt C. Jones,³ & Michael N. Jones⁴
¹ Dept. of Computer Science, University of Colorado
² Dept. of Psychology, UCSD
³ Dept. of Psychology, University of Colorado
⁴ Dept. of Psychological and Brain Sciences, Indiana University
Abstract
For over half a century, psychologists have been struck by how poor people are at
expressing their internal sensations, impressions, and evaluations via rating scales.
When individuals make judgments, they are incapable of using an absolute rating
scale, and instead rely on reference points from recent experience. This relativity
of judgment limits the usefulness of responses provided by individuals to surveys,
questionnaires, and evaluation forms. Fortunately, the cognitive processes that
transform internal states to responses are not simply noisy, but rather are influenced by recent experience in a lawful manner. We explore techniques to remove
sequential dependencies, and thereby decontaminate a series of ratings to obtain
more meaningful human judgments. In our formulation, decontamination is fundamentally a problem of inferring latent states (internal sensations) which, because of the relativity of judgment, have temporal dependencies. We propose a
decontamination solution using a conditional random field with constraints motivated by psychological theories of relative judgment. Our exploration of decontamination models is supported by two experiments we conducted to obtain
ground-truth rating data on a simple length estimation task. Our decontamination
techniques yield an over 20% reduction in the error of human judgments.
1 Introduction
Suppose you are asked to make a series of moral judgments by rating, on a 1-10 scale, various actions, with a rating of 1 indicating "not particularly bad or wrong" and a rating of 10 indicating "extremely evil." Consider the series of actions on the left.
(1) Stealing a towel from a hotel                  (1′) Testifying falsely for pay
(2) Keeping a dime you find on the ground          (2′) Using guns on striking workers
(3) Poisoning a barking dog                        (3′) Poisoning a barking dog
Now consider that instead you had been shown the series on the right. Even though individuals are asked to make absolute judgments, the mean rating of statement (3) in the first context is reliably higher than the mean rating of the identical statement (3′) in the second context (Parducci, 1968).
The classic explanation of this phenomenon is cast in terms of anchoring or primacy: information
presented early in time serves as a basis for making judgments later in time (Tversky & Kahneman,
1974). In the Netflix contest, significant attention was paid to anchoring effects by considering that
an individual who gives high ratings early in a session is likely to be biased toward higher ratings
later in a session (Koren, August 2009; Ellenberg, March 2008).
The need for anchors comes from the fact that individuals are poor at or incapable of making absolute
judgments and instead must rely on reference points to make relative judgments (e.g., Laming, 1984;
Parducci, 1965, 1968; Stewart, Brown, & Chater, 2005). Where do these reference points come
from? There is a rich literature in experimental and theoretical psychology exploring sequential dependencies, suggesting that reference points change from one trial to the next in a systematic manner. (We use the psychological jargon "trial" to refer to a single judgment or rating in a series.)
Sequential dependencies occur in many common tasks in which an individual is asked to make
a series of responses, such as filling out surveys, questionnaires, and evaluations (e.g., usability
ratings, pain assessment inventories). Every faculty member is aware of drift in grading that necessitates comparing papers graded early on a stack with those graded later. Recency effects have been
demonstrated in domains as varied as legal reasoning and jury evidence interpretation (Furnham,
1986; Hogarth & Einhorn, 1992) and clinical assessments (Mumma & Wilson, 2006).
However, the most carefully controlled laboratory studies of sequential dependencies, dating back to the 1950's (discussed by Miller, 1956), involve the rating of unidimensional stimuli, such as the loudness of a tone or the length of a line. Human performance at rating stimuli is surprisingly poor compared to an individual's ability to discriminate the same stimuli. Regardless of the domain,
responses convey not much more than 2 bits of mutual information with the stimulus (Stewart et
al., 2005). Different types of judgment tasks have been studied including absolute identification,
in which the individual's task is to specify the distinct stimulus level (e.g., 10 levels of loudness),
magnitude estimation, in which the task is to estimate the magnitude of a stimulus which may vary
continuously along a dimension, and categorization, which is a hybrid task requiring individuals to label stimuli by range. Because the number of responses in absolute identification and categorization tasks is often quite large, and because individuals are often not aware of the discreteness of stimuli in absolute identification tasks, there isn't a qualitative difference among tasks. Feedback is typically
provided, especially in absolute identification and categorization tasks. Without feedback, there are
no explicit anchors against which stimuli can be assessed.
The pattern of sequential effects observed is complex. Typically, on experimental trial t, trial t − 1 has a large influence on ratings, and trials t − 2, t − 3, etc., have successively diminishing influences. The influence of recent trials is exerted by both the stimuli and responses, a fact which makes sense in light of the assumption that individuals form their response on the current trial by analogy to recent trials (i.e., they determine a response to the current stimulus that has the same relationship as the previous response had to the previous stimulus). Both assimilation and contrast effects occur: an assimilative response on trial t occurs when the response moves in the direction of the stimulus or response on trial t − k; a contrastive response is one that moves away. Interpreting recency effects
in terms of assimilation and contrast is nontrivial and theory dependent (DeCarlo & Cross, 1990).
Many mathematical models have been developed to explain the phenomena of sequential effects in
judgment tasks. All adopt the assumption that the transduction of a stimulus to its internal representation is veridical. We refer to this internal representation as the sensation, as distinguished from the
external stimulus. (For judgments of nonphysical quantities such as emotional states and affinities,
perhaps the terms impression or evaluation would be more appropriate than sensation.) Sequential
dependencies and other corruptions of the representation occur in the mapping of the sensation to a
response. According to all theories, this mapping requires reference to previous sensation-response
pairings. However, the theories differ with respect to the reference set. At one extreme, the theory of
Stewart et al. (2005) assumes that only the previous sensation-response pair matters. Other theories
assume that multiple sensation-response anchors are required, one fixed and unchanging and another
varying from trial to trial (e.g., DeCarlo & Cross, 1990). And in categorization and absolute identification tasks, some theories posit anchors for each distinct response, which are adjusted trial-to-trial
(e.g., Petrov & Anderson, 2005). Range-frequency theory (Parducci, 1965) claims that sequential
effects arise because the sensation-response mapping is adjusted to utilize the full response range,
and to produce roughly an equal number of responses of each type. This effect is the consequence
of many other theories, either explicitly or implicitly.
Because recent history interacts with the current stimulus to determine an individual?s response,
responses have a complex relationship with the underlying sensation, and do not provide as much
information about the internal state of the individual as one would hope. In the applied psychology
literature, awareness of sequential dependencies has led some researchers to explore strategies that
mitigate relativity of judgment, such as increasing the number of response categories and varying
the type and frequency of anchors (Mumma & Wilson, 2006; Wedell, Parducci, & Lane, 1990).
In contrast, our approach to extracting more information from human judgments is to develop automatic techniques that recover the underlying sensation from a response that has been contaminated by the cognitive processes producing the response. We term this recovery process decontamination. As
we mentioned earlier, there is some precedent in the Netflix competition for developing empirical
approaches to decontamination. However, to the best of our knowledge, the competitors were not
focused on trial-to-trial effects, and their investigation was not systematic. Systematic investigation
requires ground-truth knowledge of the individuals' sensations.
2 Experiments
To collect ground-truth data for use in the design of decontamination techniques, we conducted two
behavioral experiments using stimuli whose magnitudes could be objectively determined. In both
experiments, participants were asked to judge the horizontal gap between two vertically aligned
dots on a computer monitor. The position of the dots on the monitor shifted randomly from trial
to trial. Participants were asked to respond to each dot pair using a 10-point rating scale, with 1
corresponding to the smallest gap they would see, and 10 corresponding to the largest.
The task requires absolute identification of 10 distinct gaps. The participants were only told that
their task was to judge the distance between the dots. They were not told that only 10 unique stimuli
were presented, and were likely unaware of this fact (memory of exact absolute gaps is too poor), and
thus the task is indistinguishable from a magnitude estimation or categorization task in which the gap
varied continuously. The experiment began with a practice block of ten trials. During the practice
block, participants were shown every one of the ten gaps in random order, and simultaneous with the
stimulus they were told?via text on the screen below the dots?the correct classification. After the
practice blocks, no further feedback was provided. Although the psychology literature is replete with
line-length judgment studies (two recent examples: Lacouture, 1997; Petrov & Anderson, 2005), the
vast majority provide feedback to participants on at least some trials beyond the practice block. We
wanted to avoid the anchoring provided by feedback in order that the task is more analogous to
the type of survey tasks we wish to decontaminate, e.g., the Netflix movie scores. Another
distinction between our experiments and previous experiments is an attempt to carefully control the
sequence structure, as described next.
2.1 Experiment Methodology
In Experiment 1, the practice block was followed by 2 blocks of 90 trials. Within a block, the trial
sequence was arranged such that each gap was preceded exactly once by each other gap, with the
exception that no repetitions occurred. Further, every ten trials in a block consisted of exactly one
presentation of each gap. In Experiment 2, the practice block was followed by 2 blocks of 100 trials.
The constraint on the sequence in Experiment 2 was looser than in Experiment 1: within a block,
each gap occurred exactly once preceded by each other gap. However, repetitions were included, and
there was no constraint on the subblocks of ten trials. The other key difference between experiments
was the gap lengths. In Experiment 1, gap g, with g ∈ {1, 2, ..., 10}, spanned a proportion .08g of the
screen width. In Experiment 2, gap g spanned a proportion .061 + .089g of the screen width. The
main reason for conducting Experiment 2 was that we found the gaps used in Experiment 1 resulted
in low error rates and few sequential effects for the smaller gaps. Other motivations for Experiment
2 will be explained later.
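The Experiment 1 sequence constraint (every gap preceded exactly once by every other gap, no repetitions) amounts to finding an Eulerian circuit in the complete directed graph over the ten gaps. The sketch below is a hypothetical reconstruction rather than the authors' actual stimulus software: it generates one such balanced block with Hierholzer's algorithm, and does not enforce the additional every-ten-trials subblock constraint.

```python
import random

def balanced_block(n_stimuli=10):
    """Trial sequence over stimuli 1..n in which every ordered pair (a, b),
    a != b, occurs exactly once as consecutive trials (Eulerian circuit)."""
    # Unused out-edges of the complete digraph (no self-loops), shuffled
    # so that repeated calls yield different circuits.
    out_edges = {a: [b for b in range(1, n_stimuli + 1) if b != a]
                 for a in range(1, n_stimuli + 1)}
    for edges in out_edges.values():
        random.shuffle(edges)
    stack, circuit = [1], []
    while stack:                                  # Hierholzer's algorithm
        v = stack[-1]
        if out_edges[v]:
            stack.append(out_edges[v].pop())      # follow a fresh edge
        else:
            circuit.append(stack.pop())           # vertex exhausted: backtrack
    circuit.reverse()
    return circuit                                # 91 trials, 90 transitions

seq = balanced_block()
pairs = list(zip(seq, seq[1:]))
assert len(pairs) == 90 and len(set(pairs)) == 90  # each ordered pair once
```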
Both experiments were conducted via the web, using a web portal set up for psychology studies.
Participants were prescreened for their ability to understand English instructions, and were paid $4
for the 10-15 minutes required to complete the experiment. Two participants in Experiment 1 and
one participant in Experiment 2 were excluded from data analysis because their accuracy was below
20%. The portal was opened for long enough to obtain good data from 76 participants in each
Experiment. Individuals were allowed to participate in only one of the two experiments.
2.2 Results and Discussion of Human Experiments
Figure 1 summarizes the data from Experiments 1 and 2 (top and bottom rows, respectively). All
graphs depict the error on a trial, defined as the signed difference R_t − S_t between the current
response, R_t, and the current stimulus level S_t. The left column plots the error on trial t as a function
of S_{t−1} (along the abscissa) and S_t (the different colored lines, as specified by the key between the
graphs). Pairs of stimulus gaps (e.g., G1 and G2) have been grouped together to simplify the graph.
[Figure 1 panels: error as a function of S(t−1) and S(t); the distribution of errors P(R(t)−S(t)) as a function of the stimulus difference S(t)−S(t−1); and error R(t)−S(t) as a function of lagged stimulus (lags 1-5). Stimulus pairs are grouped as G1,G2 / G3,G4 / G5,G6 / G7,G8 / G9,G10; top row: Experiment 1, bottom row: Experiment 2.]
Figure 1: Human data from Experiments 1 (top row) and 2 (bottom row).
The small bars around the points indicate one standard error of the mean. The variation along the
abscissa reflects sequential dependencies: assimilation is indicated by pairs of points with positive
slopes (larger values of S_{t−1} result in larger R_t), and contrast is indicated by negative slopes. The
pattern of results across the two experiments is remarkably consistent.
The middle column shows another depiction of sequential dependencies by characterizing the distribution of errors (R_t − S_t ∈ {>1, 1, 0, −1, <−1}) as a function of S_t − S_{t−1}. The predominance of
assimilative responses is reflected in more R_t > S_t responses when S_t − S_{t−1} < 0, and vice versa.
The rightmost column presents the lag profile that characterizes how the stimulus on trial t − k, for
k = 1, ..., 5, influences the response on trial t. The bars on each point indicate one standard error of
the mean. For the purpose of the current work, most relevant is that sequential dependencies in this
task may stretch back two or three trials.
3 Approaches To Decontamination
From a machine learning perspective, decontamination can be formulated in at least three different
ways. First, it could be considered an unsupervised infomax problem of determining a sensation
associated with each distinct stimulus such that the sensation sequence has high mutual information
with the response sequence. Second, it could be considered a supervised learning problem in which
a specialized model is constructed for each individual, using some minimal amount of ground-truth
data collected from that individual. Here, the ground truth is the stimulus-sensation correspondence,
which can be obtained (in principle, even with unknown stimuli) by laborious data collection techniques, such as asking individuals to provide a full preference ordering or multiple partial orderings
over sets of stimuli, or asking individuals to provide multiple ratings of a stimulus in many different
contexts, so as to average out sequential effects. Third, decontamination models could be built based
on ground-truth data for one group of individuals and then tested on another group. In this paper,
we adopt this third formulation of the problem.
Formally, the decontamination problem involves inferring the sequence of (unobserved) sensations
given the complete response sequence. To introduce some notation, let R^p_{t1,t2} denote the sequence
of responses made by participant p on trials t1 through t2 when shown a sequence of stimuli that
evoke the sensation sequence S^p_{t1,t2}.¹ Decontamination can be cast as computing the expectation or
probability over S^p_{1,T} given R^p_{1,T}, where T is the total number of judgments made by the individual.
Although psychological theories of human judgment address an altogether different problem (that
of predicting R^p_t, the response on trial t, given S^p_{1,t} and R^p_{1,t−1}), they can inspire decontamination
techniques. Two classes of psychological theories correspond to two distinct function approximation
techniques. Many early models of sequential dependencies, culminating in the work of DeCarlo and
Cross (1990), are framed in terms of autoregression. In contrast, other models favor highly flexible,
nonlinear approaches that allow for similarity-based assimilation and contrast, and independent representations for each response label (e.g., Petrov & Anderson, 2005). Given the discrete stimuli and
responses, a lookup table seems the most general characterization of these models.
We explore a two-dimensional space of decontamination techniques. The first dimension of this
space is the model class: regression, lookup table, or an additive hybrid. We define our regression
model estimating S_t as:
$$\mathrm{REG}_t(m, n) = \gamma + \alpha \cdot R_{t-m+1,t} + \beta \cdot S_{t-n,t-1}, \qquad (1)$$
where the model parameters α and β are vectors, and γ is a scalar. Similarly, we define our lookup
table LUT_t(m, n) to produce an estimate of S_t by indexing over the m responses R_{t−m+1,t} and the
n sensations S_{t−n,t−1}. Finally, we define an additive hybrid, REG+LUT(m, n), by first constructing
a regression model, and then building a lookup table on the residual error, S_t − REG_t(m, n). The
motivation for the hybrid is the complementarity of the two models, the regression model capturing
linear regularities and the lookup table representing arbitrary nonlinear relationships.
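A minimal sketch of the three model classes on integer-coded response and sensation arrays; the function names, the array layout, and the least-squares fit are our own illustrative assumptions, not the authors' code. The hybrid is obtained by passing the REG predictions to fit_lut, which then tabulates residuals.

```python
import numpy as np
from collections import defaultdict

def fit_reg(R, S, m=2, n=1):
    """Least-squares REG(m, n): S_t ~ gamma + alpha . R_{t-m+1..t} + beta . S_{t-n..t-1}."""
    start = max(m - 1, n)
    rows = [[1.0] + list(R[t - m + 1: t + 1]) + list(S[t - n: t])
            for t in range(start, len(S))]          # leading 1.0 carries gamma
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(S[start:]), rcond=None)
    return coef

def fit_lut(R, S, m=2, n=1, reg_pred=None):
    """LUT(m, n): table of mean targets keyed by recent responses/sensations.
    With reg_pred given, fits the hybrid's table on residuals S_t - REG_t."""
    sums, counts = defaultdict(float), defaultdict(int)
    for t in range(max(m - 1, n), len(S)):
        key = tuple(R[t - m + 1: t + 1]) + tuple(S[t - n: t])
        sums[key] += S[t] - (reg_pred[t] if reg_pred is not None else 0.0)
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}
```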
The second dimension in our space of decontamination techniques specifies how inference is handled. Decontamination is fundamentally a problem of inferring unobserved states. To utilize any
of the models above for n > 0, the sensations S_{t−n,t−1} must be estimated. Although time flows in
one direction, inference flows in two: in psychological models, R_t is influenced by both S_t and
S_{t−1}; this translates to a dependence of S_t on both S_{t−1} and S_{t+1} when conditioned on R_{1,T}. To
handle inference properly, we construct a linear-chain conditional random field (Lafferty, McCallum, & Pereira, 2001; Sutton & McCallum, 2007). As an alternative to the conditional random field
(hereafter, CRF), we also consider a simple approach in which we simply set n = 0 and discard the
sensation terms in our regression and lookup tables. At the other extreme, we can assume an oracle
that provides S_{t−n,t−1}; this oracle approach offers an upper bound on achievable performance.
We explore the full Cartesian product of approaches consisting of models chosen from
{REG, LUT, REG+LUT} and inference techniques chosen from {SIMPLE, CRF, ORACLE}. The
SIMPLE and ORACLE approaches are straightforward classic statistics, but we need to explain how
the different models are incorporated into a CRF. The linear-chain CRF is a distribution
$$P(S_{1,T} \mid R_{1,T}) = \frac{1}{Z(R_{1,T})} \exp\left\{\sum_{t=1}^{T}\sum_{k=1}^{K} \lambda_k f_k(t, S_{t-1,t}, R_{1,T})\right\} \qquad (2)$$
with a given set of feature functions, {f_k}. The linear combination of these functions determines the
potential at some time t, denoted φ_t, where a higher potential reflects a more likely configuration
of variables. To implement a CRF-REG model, we would like the potential to be high when the
regression equation is satisfied, e.g., φ_t = −(REG_t(m, n) − S_t)². Simply expanding this error
yields a collection of first and second order terms. Folding the terms not involving the sensations
into the normalization constant, the following terms remain for REG(2, 1): S_t, R_tS_t, S_t², R_tS_{t−1},
R_{t−1}S_t, and S_tS_{t−1}.² The regression potential function can be obtained by making each of these
terms into a real-valued feature, and determining the λ parameters in Equation 2 to yield the α, β,
and γ parameters in Equation 1.³
The CRF-LUT model could be implemented using indicator features, as is common in CRF models,
but this approach yields an explosion of free parameters: a feature would be required for each cell of
the table and each value of S_t, yielding 10⁴ free parameters for a gap detection task with a modest
CRF-LUT(2, 1). Instead, we opted for the direct analog of the CRF-REG: encouraging configurations
in which S_t is consistent with LUT_t(m, n) via potential φ_t = −(LUT_t(m, n) − S_t)². This approach
yields three real-valued features: LUT_t(m, n)², S_t², and LUT_t(m, n)S_t. (Remember that lookup
table values are indexed by S_{t−1}, and therefore cannot be folded into the normalization constant.)
Finally, the CRF-REG+LUT is a straightforward extension of the models we've described, based on
the potential φ_t = −(REG_t(m, n) + LUT_t(m, n) − S_t)², which still has only quadratic terms in
S_t and S_{t−1}. Having now described a 3 × 3 space of decontamination approaches, we turn to the
details of our decontamination experiments.

¹We are switching terminology: in the discussion of our experiment, S refers to the stimulus. In the discussion of decontamination, S will refer to the sensation. The difference is minor because the stimulus and sensation are in one-to-one correspondence.
²The terms R_{t−1}S_{t−1} and S_{t−1}² are omitted because they correspond to R_tS_t and S_t², respectively.
³As we explain shortly, the {λ_k} are determined by CRF training; our point here is that the CRF has the capacity to represent a least-squares regression solution.
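To make the inference side concrete, the sketch below computes the posterior expectation E[S_t | R_{1,T}] on a linear chain whose pairwise log-potential is the negative squared error of a REG(2,1)-style predictor, using forward-backward. The ten discrete levels, the predictor's coefficients, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

LEVELS = np.arange(1, 11)  # assumed ten discrete sensation levels

def logsumexp(x, axis, keepdims=False):
    m = np.max(x, axis=axis, keepdims=True)
    out = m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))
    return out if keepdims else np.squeeze(out, axis=axis)

def predict(r_t, r_prev, s_prev, a=0.8, b=0.1, c=0.1, g=0.0):
    """Illustrative REG(2,1)-style point prediction of S_t (made-up coefficients)."""
    return g + a * r_t + b * r_prev + c * s_prev

def expected_sensations(R):
    """E[S_t | R_{1,T}] via forward-backward with phi_t = -(REG_t - S_t)^2."""
    T, K = len(R), len(LEVELS)
    log_psi = np.zeros((T, K, K))  # log_psi[t][i, j]: S_{t-1}=level i, S_t=level j
    for t in range(1, T):
        pred = predict(R[t], R[t - 1], LEVELS[:, None])   # shape (K, 1)
        log_psi[t] = -(pred - LEVELS[None, :]) ** 2
    log_alpha = np.zeros((T, K))                          # flat prior on S_1
    for t in range(1, T):
        log_alpha[t] = logsumexp(log_alpha[t - 1][:, None] + log_psi[t], axis=0)
    log_beta = np.zeros((T, K))
    for t in range(T - 2, -1, -1):
        log_beta[t] = logsumexp(log_psi[t + 1] + log_beta[t + 1][None, :], axis=1)
    log_marg = log_alpha + log_beta                       # unnormalized marginals
    probs = np.exp(log_marg - logsumexp(log_marg, axis=1, keepdims=True))
    return probs @ LEVELS                                 # expectation per trial
```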
3.1 Debiasing and Decompressing
Although our focus is on decontaminating sequential dependencies, or desequencing, the quality
of human judgments can be reduced by at least three other factors. First, individuals may have an
overall bias toward smaller or larger ratings. Second, individuals may show compression, possibly
nonlinear, of the response range. Third, there may be slow drift in the center or spread of the
response range, on a relatively long time scale. All of these factors are likely to be caused at least in
part by trial-to-trial sequential effects. For example, compression will be a natural consequence of
assimilation because the endpoints of the response scale will move toward the center. Nonetheless
we find it useful to tease apart the factors that are easy to describe (bias, compression) from those
that are more subtle (assimilation, contrast).
In the data from our two experiments, we found no evidence of drift, as determined by the fact that
regression models with moving averages of the responses did not improve predictions. This finding
is not terribly surprising given that the entire experiment took only 10-15 minutes to complete.
We briefly describe how we remove bias and compression from our data. Decompression can be
achieved with a LUT(1, 0), which maps each response into the expected sensation. For example, in
Experiment 1, the shortest stimuli were reported as G1 and G2 with high accuracy, but the longest stimuli
tended to be underestimated by all participants. The LUT(1, 0) compensates for this compression
by associating responses G8 and G9 with higher sensation levels if the table entries are filled based
on the training data according to: LUT_t(1, 0) ← E[S_t | R_t]. All of the higher order lookup tables,
LUT(m, n), for m ≥ 1 and n ≥ 0, will also perform nonlinear decompression in the same manner.
The REG models alone will also achieve decompression, though only linear decompression.
We found ample evidence of individual biases in the use of the response scale. To debias the data,
we compute the mean response of a particular participant p, R̄^p ≡ (1/T) Σ_t R^p_t, and ensure the means
are homogeneous via the constraint R^p_t − R̄^p = S^p_t − S̄^p. Assuming that the mean sensation is
identical for all participants (as it should be in our experiments), debiasing can be incorporated
into the lookup tables by storing not E[S_t | R_t, ...], but rather E[S^p_t + R̄^p | R_t, ...], and recovering the
sensation for a particular individual using LUT(m, n) − R̄^p. (This trick is necessary to index into
the lookup table with discrete response levels. Simply normalizing individuals' responses will yield
noninteger responses.) Debiasing of the regression models can be achieved by adding a R̄^p term to
the regression. Note that this extra term, whether in the lookup table retrieval or the regression,
results in additional features involving combinations of R̄^p and S_t, S_{t−1}, and LUT(m, n) being
added to the three CRF models.
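A sketch of these two corrections, assuming training data arrives as (participant, responses, sensations) triples; the names and data layout are illustrative, not the authors' code.

```python
import numpy as np
from collections import defaultdict

def fit_debiased_lut10(data):
    """Debiased, decompressing LUT(1,0): stores E[S_t + Rbar^p | R_t]."""
    r_bar = {p: float(np.mean(R)) for p, R, S in data}   # Rbar^p per participant
    sums, counts = defaultdict(float), defaultdict(int)
    for p, R, S in data:
        for r, s in zip(R, S):
            sums[r] += s + r_bar[p]
            counts[r] += 1
    return {r: sums[r] / counts[r] for r in sums}

def decontaminate(lut, responses):
    """Sensation estimates for a new participant: LUT(R_t) - Rbar^p."""
    r_bar = float(np.mean(responses))
    return [lut[r] - r_bar for r in responses]
```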
3.2 Modeling Methodology
In all the results we report on, we use a one-back response history, i.e., m = 2. Therefore, the
SIMPLE models are REG(2, 0), LUT(2, 0), and REG+LUT(2, 0); the ORACLE and CRF models are
REG(2, 1), LUT(2, 1), and REG+LUT(2, 1). In the ORACLE models, S_{t−1} is assumed to be known
when S_t is estimated; in the CRF models, the sensations are all inferred. The models are trained
via multiple splits of the available data into equal-sized training and test sets (38 participants per
set). Parameters of the SIMPLE-REG and ORACLE-REG models are determined by least-squares
regression on the training set. Entries in the SIMPLE-LUT and ORACLE-LUT are the expectation over
trials and participants: E[S^p_t + R̄^p | R_t, R_{t−1}, ...]. The SIMPLE-REG+LUT and ORACLE-REG+LUT
models are trained first by obtaining the regression coefficients, and then filling lookup table entries
with the expected residual, E[S^p_t − REG^p_t | R_t, R_{t−1}, ...]. For the CRF models, the feature coefficients
{λ_k} are obtained via gradient descent and the forward-backward algorithm, as detailed in Sutton
[Figure 2 panels: sensation reconstruction error (RMSE) for Experiments 1 (left) and 2 (right). Top row: baseline, decompress, debias, debias + decompress, and CRF-REG+LUT(2,1). Bottom row: SIMPLE, CRF, and ORACLE variants of REG, LUT, and REG+LUT, with reliable pairwise differences marked p < .05 or p < .001.]
Figure 2: Results from Experiment 1 (left column) and Experiment 2 (right column). The top row
compares the reduction in prediction error for different types of decontamination. The bottom row
compares reduction in prediction error for different desequencer algorithms.
and McCallum (2007). The lookup tables used in the CRF-LUT and CRF-REG+LUT are the same
as those in the ORACLE-LUT and ORACLE-REG+LUT models. The CRF λ parameters are initialized
to be consistent with our notion of the potential as the negative squared error, using initialization
values obtained from the regression coefficients of the ORACLE-REG model. This initialization is
extremely useful because it places the parameters in easy reach of an effective local minimum. No
regularization is used on the CRF because of the small number of free parameters (7 for CRF-REG,
5 for CRF-LUT, and 14 for CRF-REG+LUT). Each model is used to determine the expected value of
S_t. We had initially hoped that a Viterbi decoding of the CRF might yield useful predictions, but the
expectation proved far superior, most likely because there is not a single path through the CRF that
is significantly better than others due to the high level of noise in the data.
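The evaluation loop implied by the text can be sketched as below, under the assumption of a generic fit/predict interface; the split sizes and RMSE criterion come from the paper, while the names and function signatures are illustrative.

```python
import numpy as np

def mean_rmse(fit, predict, subjects, n_splits=100, seed=0):
    """Average sensation-reconstruction RMSE over random 38/38 splits."""
    rng = np.random.default_rng(seed)
    rmses = []
    for _ in range(n_splits):
        order = rng.permutation(len(subjects))
        train = [subjects[i] for i in order[:38]]
        test = [subjects[i] for i in order[38:76]]
        params = fit(train)
        sq_errs = []
        for p, R, S in test:
            S_hat = np.asarray(predict(params, R), dtype=float)
            sq_errs.extend((S_hat - np.asarray(S, dtype=float)) ** 2)
        rmses.append(np.sqrt(np.mean(sq_errs)))
    return float(np.mean(rmses))
```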
Beyond the primary set of models described above, we explored several other models. We tested
models in which the sensation and/or response values are log transformed, because sensory transduction introduces logarithmic compression. However, these models do not reliably improve decontamination. We examined higher-order regression models, i.e., m > 2. These models are helpful
for Experiment 1, but only because we inadvertently introduced structure into the sequences via the
constraint that each stimulus had to be presented once before it could be repeated. The consequence
of this constraint is that a series of small gaps predicted a larger gap on the next trial, and vice
versa. One reason for conducting Experiment 2 was to eliminate this constraint. It also eliminated
the benefit of higher-order regression models. We also examined switched regression models whose
parameters were contingent on the current response. These models do not significantly outperform
the REG+LUT models.
4 Results
Figure 2 shows the root mean squared error (RMSE) between the ground-truth sensation and the
model-estimated sensation over the set of validation subjects for 100 different splits of the data. The
left and right columns present results for Experiments 1 and 2, respectively. In the top row of the
figure, we compare baseline performance with no decontamination (where the sensation prediction
is simply the participant's actual response; pink bar) against decompression alone (magenta bar),
debiasing alone (red bar), debiasing and decompression (purple bar), and the best full decontamination model, which includes debiasing, decompression, and desequencing (blue bar). The difference
between each pair of these results is highly reliable, indicating that bias, compression, and recency
effects all contribute to the contamination of human judgments.
The reduction of error due to debiasing is 14.8% and 11.1% in Experiments 1 and 2, respectively.
The further reduction in error when decompressing is incorporated is 4.8% and 3.4% in Experiments
1 and 2. Finally, the further reduction in error when desequencing is incorporated is 5.0% and 4.1%
in Experiments 1 and 2. We reiterate that bias and compression likely have at least part of their basis
in sequential dependencies. Indeed, models like CRF-REG+LUT perform nearly as well even without
separate debiasing and decompression corrections.
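Reading each stagewise percentage as a further relative reduction over the preceding model (our assumption about how the figures compose), the corrections compound to roughly the "over 20%" total quoted in the Discussion for Experiment 1:

$$1 - (1 - 0.148)(1 - 0.048)(1 - 0.050) \approx 0.230 \quad \text{(Experiment 1)}$$
$$1 - (1 - 0.111)(1 - 0.034)(1 - 0.041) \approx 0.176 \quad \text{(Experiment 2)}$$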
The bottom row of Figure 2 examines the relative performance of the nine models defined by the
Cartesian product of model type (REG, LUT, and REG+LUT) and inference type (SIMPLE, CRF,
and ORACLE). The joint model REG+LUT, which exploits both the regularity of the regression model
and the flexibility of the lookup table, clearly works better than either REG or LUT in isolation.
Comparing SIMPLE, which ignores the mutual constraints provided by the inferred sensations,
to CRF, which exploits bidirectional temporal constraints, we see that the CRF inference produces
reliably better results in five of six cases, as evaluated by paired t-tests. We do not have a good
explanation for the advantage of SIMPLE-LUT over CRF-LUT in Experiment 1, although there are
some minor differences in how the lookup tables for the two models are constructed, and we are
investigating whether those differences might be responsible. We included the ORACLE models to
give us a sense of how much improvement we might potentially obtain, and clearly there is still
some potential gain as indicated by ORACLE-REG+LUT.
5 Discussion
Psychologists have long been struck by the relativity of human judgments and have noted that relativity limits how well individuals can communicate their internal sensations, impressions, and evaluations via rating scales. We've shown that decontamination techniques can improve the quality of
judgments, reducing error by over 20%. Is a 20% reduction significant? In the Netflix competition, if
this improvement in the reliability of the available ratings translated to a comparable improvement
in the collaborative filtering predictions, it would have been of critical significance.
In this paper, we explored a fairly mundane domain: estimating the gap between pairs of dots on
a computer monitor. The advantage of starting our explorations in this domain is that it provided
us with ground truth data for training and evaluation of models. Will our conclusions about this
sensory domain generalize to more subjective and emotional domains such as movies and art? We
are currently designing a study in which we will collect liking judgments for paintings. Using the
models we developed for this study, we can obtain a decontamination of the ratings and identify
pairs of paintings where the participant's ratings conflict with the decontaminated impressions. Via
a later session in which we ask participants for pairwise preferences, we can determine whether
the decontaminator or the raw ratings are more reliable. We have reason for optimism because all
evidence in the psychological literature suggests that corruption occurs in the mapping of internal
states to responses, and there's no reason to suspect that the mapping is different for different types
of sensations. Indeed, it seems that if even responses to simple visual stimuli are contaminated,
responses to more complex stimuli with a more complex judgment task will be even more vulnerable.
One key limitation of the present work is that it examines unidimensional stimuli, and any interesting
domain will involve multidimensional stimuli, such as movies, that could be rated in many different
ways depending on the current focus of the evaluator. Anchoring likely determines relevant dimensions as well as the reference points along those dimensions, and it may require a separate analysis
to decontaminate this type of anchor.
On the positive side, the domain is ripe for further explorations, and our work suggests many directions for future development. For instance, one might better leverage the CRF's ability to predict not
just the expected sensation, but the distribution over sensations. Alternatively, one might pay closer
attention to the details of psychological theory in the hope that it provides helpful constraints. One
such hint is the finding that systematic effects of sequences have been observed on response latencies
in judgment tasks (Lacouture, 1997); therefore, latencies may prove useful for decontamination.
A Wired Magazine article on the Netflix competition was entitled, "This psychologist might outsmart
the math brains competing for the Netflix prize" (Ellenberg, March 2008). This provocative title
didn't turn out to be true, but the title did suggest, consistent with the findings of our research,
that the math brains may do well to look inward at the mechanisms of their own brains.
Acknowledgments
This research was supported by NSF grants BCS-0339103, BCS-720375, and SBE-0518699. The
fourth author was supported by an NSF Graduate Student Fellowship. We thank Owen Lewis for
conducting initial investigations and discussions that allowed us to better understand the various
cognitive models, and Dr. Dan Crumly for the lifesaving advice on numerical optimization techniques.
References
DeCarlo, L. T., & Cross, D. V. (1990). Sequential effects in magnitude scaling: Models and theory.
Journal of Experimental Psychology: General, 119, 375-396.
Ellenberg, J. (March 2008). This psychologist might outsmart the math brains competing for
the Netflix prize. Wired Magazine, 16. (http://www.wired.com/techbiz/media/magazine/16-03/mf_netflix?currentPage=all#)
Furnham, A. (1986). The robustness of the recency effect: Studies using legal evidence. Journal of
General Psychology, 113, 351-357.
Hogarth, R. M., & Einhorn, H. J. (1992). Order effects in belief updating: The belief adjustment
model. Cognitive Psychology, 24, 1-55.
Koren, Y. (August 2009). The BellKor solution to the Netflix Grand Prize.
Lacouture, Y. (1997). Bow, range, and sequential effects in absolute identification: A response-time
analysis. Psychological Research, 60, 121-133.
Lafferty, J., McCallum, A., & Pereira, F. (2001). Conditional random fields: Probabilistic models
for segmenting and labeling sequence data. In International Conference on Machine Learning (pp.
282-289). San Mateo, CA: Morgan Kaufmann.
Laming, D. R. J. (1984). The relativity of "absolute" judgements. Journal of Mathematical and
Statistical Psychology, 37, 152-183.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity
for information processing. Psychological Review, 63, 81-97.
Mumma, G. H., & Wilson, S. B. (2006). Procedural debiasing of primacy/anchoring effects in
clinical-like judgments. Journal of Clinical Psychology, 51, 841-853.
Parducci, A. (1965). Category judgment: A range-frequency model. Psychological Review, 72,
407-418.
Parducci, A. (1968). The relativism of absolute judgment. Scientific American, 219, 84-90.
Petrov, A. A., & Anderson, J. R. (2005). The dynamics of scaling: A memory-based anchor model
of category rating and identification. Psychological Review, 112, 383-416.
Stewart, N., Brown, G. D. A., & Chater, N. (2005). Absolute identification by relative judgment.
Psychological Review, 112, 881-911.
Sutton, C., & McCallum, A. (2007). An introduction to conditional random fields for relational
learning. In L. Getoor & B. Taskar (Eds.), Introduction to statistical relational learning. Cambridge, MA: MIT Press.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science,
185, 1124-1131.
Wedell, D. H., Parducci, A., & Lane, M. (1990). Reducing the dependence of clinical judgment
on the immediate context: Effects of number of categories and type of anchors. Journal of
Personality and Social Psychology, 58, 319-329.
Learning Multiple Tasks using Manifold Regularization
Arvind Agarwal*
Hal Daumé III*
Department of Computer Science
University of Maryland
College Park, MD 20740
[email protected]
[email protected]
Samuel Gerber
Scientific Computing and Imaging Institute
University of Utah
Salt Lake City, Utah 84112
[email protected]
Abstract
We present a novel method for multitask learning (MTL) based on manifold regularization: assume that all task parameters lie on a manifold. This is the generalization of a common assumption made in the existing literature: task parameters
share a common linear subspace. One proposed method uses the projection distance from the manifold to regularize the task parameters. The manifold structure
and the task parameters are learned using an alternating optimization framework.
When the manifold structure is fixed, our method decomposes across tasks which
can be learnt independently. An approximation of the manifold regularization
scheme is presented that preserves the convexity of the single task learning problem, and makes the proposed MTL framework efficient and easy to implement.
We show the efficacy of our method on several datasets.
1 Introduction
Recently, it has been shown that learning multiple tasks together helps learning [8, 19, 9] when the
tasks are related, and one is able to use an appropriate notion of task relatedness. There are many
ways by which one can enforce the relatedness of the tasks. One way to do so is to assume that two
tasks are related if their parameters are ?close?. This notion of relatedness is usually incorporated in
the form of a regularizer [4, 16, 13] or a prior [15, 22, 21].
In this work we present a novel approach for multitask learning (MTL) that considers a notion of
relatedness based on ideas from manifold regularization.¹ Our approach is based on the assumption
that the parameters of related tasks can not vary arbitrarily but rather lie on a low dimensional manifold. A similar idea underlies the standard manifold learning problems: the data does not change
arbitrarily, but instead follows a manifold structure. Our assumption is also a generalization of the
assumption made in [1] which assumes that all tasks share a linear subspace, and a learning framework consists of learning this linear subspace and task parameters simultaneously. We remove the
linear constraint from this problem, and assume that the tasks instead share a non-linear subspace.
In our proposed approach we learn the task parameters and the task-manifold alternately, learning
one while keeping the other fixed, similar to [4]. First, we learn all task parameters using a single
task learning (STL) method, and then use these task parameters to learn the initial task manifold. The
task-manifold is then used to relearn the task parameters using manifold regularization. Learning of
manifold and task parameters is repeated until convergence. We emphasize that when we learn the
task parameters (keeping the manifold structure fixed), the MTL framework decomposes across the
*This work was done at the School of Computing, University of Utah, Salt Lake City, Utah.
¹It is not to be confused with the manifold regularization presented in [7]. We use the projection distance
for regularization while Belkin et al. use the graph structure (graph Laplacian).
tasks, which can be learned independently using standard method such as SVMs. Note that unlike
most manifold learning algorithms, our framework learns an explicit representation of the manifold
and naturally extends to new tasks. Whenever a new task arrives, one can simply use the existing
manifold to learn the parameters of the new task. For a new task, our MTL model is very efficient
as it does not require relearning all tasks.
As shown later in the examples, our method is simple, and can be implemented with only a small
change to the existing STL algorithms. Given a black box for manifold learning, STL algorithms
can be adapted to the proposed MTL setting. To make the proposed framework even simpler, we
provide an approximation which preserves the convexity of the STL problem. We emphasize that
this approximation works very well in practice. All the experimental results used this approximation.
2 Related Work
In MTL, task relatedness is a fundamental question and models differ in the ways they answer
this question. Like our method, most of the existing methods first assume a structure that defines
the task relatedness, and then incorporate this structure in the MTL framework in the form of a
regularizer [4, 16, 13].
One plausible approach is to assume that all task parameters lie in a subspace [1]. The tasks are
learned by forcing the parameters to lie in a common linear subspace, therefore exploiting the assumed relatedness in the model. Argyriou et al. [4] later generalized this work by using a function
F to model the shared structure. In this work, the relatedness structure is forced by applying a
function F on a covariance matrix D, which yields a regularization of the form tr(F(D)WW^T) on
the parameters W. Here, the function F can model different kinds of relatedness structures among
tasks, including the linear subspace structure [1]. Given a function F, this framework learns both
the relatedness matrix D and the task parameters W. One of the limitations of this approach is
the dependency on F which has to be provided externally. In an informal way, F introduces the
non-linearity and it is not clear as what the right choice of F is. Our framework generalizes the linear framework by introducing the nonlinearity through the manifold structure learned automatically
from the data, and thus avoids the need of any external function. Argyriou et. al. extend their work
[4] in [2, 3] where non-linearity is introduced by considering a kernel function on the input data,
and then learning the linear subspace in the Hilbert space. This method in spirit is very similar to
our method except that we learn an explicit manifold therefore our method is naturally extensible to
new tasks.
Another work that models the task relatedness in the form of proximity of the parameters is [16]
which assumes that the task parameter w_t for each task is close to some common task w_0 with some
variance v_t. These v_t and w_0 are learned by minimizing the Euclidean norm, which is again equivalent to working in the linear space. This idea is later generalized by [13], where tasks are clustered,
and regularized with respect to the cluster they belong to. The task parameters are learned under this
cluster assumption by minimizing a combination of different penalty functions.
There is another line of work [10], where task relatedness is modeled in terms of a matrix B which
needs to be provided externally. There is also a large body of work on multitask learning that finds
the shared structure in the tasks using Bayesian inference [23, 24, 9], which, in spirit, is similar to
the above approaches, but done in a Bayesian way. It is to be noted that all of the above methods
either work in a linear setting or require an external function/matrix to enforce the nonlinearity. In our
method, we work in the non-linear setting without using any external function.
3 Multitask Learning using Manifold
In this section we describe the proposed MTL framework. As mentioned earlier, our framework
assumes that the task parameters lie on a manifold, which is a step beyond the assumption made
in [1], i.e., that the task parameters lie on a linear subspace or share a common set of features. Similar
to the linear subspace algorithm [1] that learns the task parameters (and the shared subspace) by
regularizing the STL framework with the orthogonal projections of the task parameters onto the
subspace, we propose to learn the task parameters (and the non-linear subspace, i.e., the task-manifold) by
regularizing the STL with the projection distance of the task parameters from this task-manifold (see
Figure 1).
We begin with some notation. Let T be the total number of tasks, and for each task t, let
X_t = {x_1, ..., x_{n_t}} be the set of examples and Y_t = {y_1, ..., y_{n_t}} be the corresponding labels.
Each example x_i ∈ R^d is a d-dimensional vector, and y_i is a label; y_i ∈ {+1, −1} in case of a
classification problem, and a real value y_i ∈ R in case of a regression problem. n_t is the number
of examples in task t. For simplicity of notation, we assume that all tasks have the same
number of examples, i.e., n_1 = ... = n_T = n, though in practice they may vary. Now for each task
t, let θ_t be the parameter vector, referred to as the task parameter.
Given example-label pair sets (X_t, Y_t) for task t, a learning problem would be to find a function f_t
that, for any future example x, predicts the correct value of y, i.e., y = f_t(x). A standard way to
learn this function is to minimize the loss between the value predicted by the function and the true
value. Let L be such a loss function. Let k be a kernel defined on the input examples, k : R^d × R^d → R,
and H_k be the reproducing kernel Hilbert space (RKHS) associated with the kernel k. Restricting f_t
to the functions in the RKHS and denoting it by f(x, θ_t) = ⟨θ_t, φ(x)⟩, single task learning solves the
following optimization problem:
$$\theta_t^* = \arg\min_{\theta_t} \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|f_t\|_{\mathcal{H}_k}^2, \qquad (1)$$
here λ is a regularization parameter. Note that the kernel is assumed to be common for all tasks and
hence does not have the subscript t. This is equivalent to saying that all tasks belong to the same
RKHS.

[Figure 1: Projection of the estimated parameters w of the task in hand on the manifold learned from all tasks' parameters. w* is the optimal parameter.]
Now one can extend the above STL framework to the multitask setting. In MTL, tasks are related,
this notion of relatedness is incorporated through a regularizer. Let u be such a regularizer; then MTL
solves:
$$(\theta_1^*, \ldots, \theta_T^*) = \arg\min_{(\theta_1, \ldots, \theta_T)} \sum_{t=1}^{T} \Big( \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|f_t\|_{\mathcal{H}_k}^2 \Big) + \gamma\, u(\theta_1, \ldots, \theta_T), \qquad (2)$$
where γ is a trade-off parameter similar to λ that trades off the amount of MTL regularization. As
mentioned in Section 2, there are many ways in which this regularizer can be implemented. For
example, for the assumption that the task parameters are close to a common task θ_0, the regularizer
would just be ‖θ_t − θ_0‖². In our approach, we split the regularizer u(θ_1, ..., θ_T) into T different
regularizers u(θ_t, M) such that u(θ_t, M) regularizes the parameter of task t while considering the
effect of other tasks through the manifold M. The optimization problem under such a regularizer can
be written as:
$$(\theta_1^*, \ldots, \theta_T^*) = \arg\min_{(\theta_1, \ldots, \theta_T),\, \mathcal{M}} \sum_{t=1}^{T} \Big( \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|f_t\|_{\mathcal{H}_k}^2 + \gamma\, u(\theta_t, \mathcal{M}) \Big). \qquad (3)$$
Note that the optimization is now performed over both the task parameters and the manifold. If the
manifold structure M is fixed, then the above optimization problem decomposes into T independent
optimization problems. In our approach, the regularizer depends on the structure of the manifold
constructed from the task parameters {θ_1, ..., θ_T}. Let M be such a manifold, and P_M(θ_t) be the
projection distance of θ_t from the manifold. Now one can use this projection distance as the
regularizer u(θ_t, M) in the cost function, since all task parameters are assumed to lie on the task
manifold M. The cost function is now given by:
$$C_P = \sum_{t=1}^{T} \Big( \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|f_t\|_{\mathcal{H}_k}^2 + \gamma\, P_{\mathcal{M}}(\theta_t) \Big). \qquad (4)$$
Since the manifold structure is not known, the cost function (4) needs to be optimized simultaneously for the task parameters (θ_1, ..., θ_T) and for the task-manifold M. Optimizing for θ and M
jointly is a hard optimization problem; therefore we resort to alternating optimization. We first
fix the task parameters and learn the manifold. Next, we fix the manifold M, and learn the task
parameters by minimizing (4). In order to minimize (4) for the task parameters, we need an expression for P_M, i.e., an expression for computing the projection distance of task parameters from the
manifold. More precisely, we only need the gradient of P_M, not the function itself, since we will
solve this problem using gradient descent.
3.1 Manifold Regularization
Our approach relies heavily on the capability to learn a manifold, and to be able to compute the
gradient of the projection distances onto the manifold. Much recent work in manifold learning
focused on uncovering low dimensional representations [18, 6, 17, 20] of the data. These approaches
do not provide the tools crucial to this work, i.e., the gradient of the projection distance. Recent
work [11] addresses this issue and proposes a manifold learning algorithm based on the idea of
principal surfaces [12]. It explicitly represents the manifold in the ambient space as a parametric
surface which can be used to compute the projection distance and its gradient.
For the sake of completeness, we briefly describe this method (for details, refer to [11]). The method
is based on minimizing the expected reconstruction error E[‖g(h(θ)) − θ‖²] of the task parameter θ
onto the manifold M. Here h is the mapping from the manifold to the lower dimensional Euclidean
space and g is the mapping from the lower dimensional Euclidean space to the manifold. Thus, the
composition g ∘ h maps a point belonging to the manifold to the manifold, using the mapping to the
Euclidean space as an intermediate step. Note that θ and g(h(θ)) are usually not the same. These
mappings g and h can be formulated in terms of kernel regressions over the data points:
$$h(\theta) = \sum_{j=1}^{T} \frac{K_\sigma(\theta - \theta_j)}{\sum_{l=1}^{T} K_\sigma(\theta - \theta_l)}\, z_j \qquad (5)$$
with K_σ a kernel function and z_j a set of parameters to be estimated in the manifold learning
process. Similarly,
$$g(r) = \sum_{j=1}^{T} \frac{K_r(r - h(\theta_j))}{\sum_{l=1}^{T} K_r(r - h(\theta_l))}\, \theta_j \qquad (6)$$
again with K_r a kernel function.
Note that in the limit, the kernel regression converges to the conditional expectation g(r) = E[θ | r],
where the expectation is taken with respect to the probability distribution p(θ) from which the parameters are assumed to be sampled. If h is an orthogonal projection, this yields a principal
surface [12], i.e., informally, g passes through the middle of the density. In [11] it is shown that in
the limit, as the number of samples to learn from increases, h indeed yields an orthogonal projection onto g. Under this orthogonal projection, the estimation of the parameters z_i, i.e., the manifold
learning, can be done through gradient descent on the sample mean of the projection distance,
$\frac{1}{T}\sum_{i=1}^{T} \|g(h(\theta_i)) - \theta_i\|^2$, using a global manifold learning approach for initialization. Once h is estimated, the projection distance is immediate by
$$P_{\mathcal{M}}(\theta) = \|\theta - g(h(\theta))\|^2 = \|\theta - \theta_{\mathcal{M}}\|^2 \qquad (7)$$
For the optimization of (4) we need the gradient of the projection distance, which is
$$\frac{dP_{\mathcal{M}}(\theta)}{d\theta} = 2\,(g(h(\theta)) - \theta)\, \frac{dg(r)}{dr}\Big|_{r=h(\theta)} \frac{dh(\theta)}{d\theta}. \qquad (8)$$
The projection distance for a single task's parameters is O(n) due to the definition of h and g as
kernel regressions, which show up in the projection distance gradient through dg(r)/dr|_{r=h(θ)} and
dh(θ)/dθ. This is fairly expensive; therefore we propose an approximation to the exact projection
gradient, justified by the convergence of h to an orthogonal projection. For an orthogonal projection
the term dg(r)/dr|_{r=h(θ)} · dh(θ)/dθ vanishes (dh(θ)/dθ is orthogonal to the tangent plane
dg(r)/dr|_{r=h(θ)} of the projected point) and the gradient simplifies to
$$\frac{dP_{\mathcal{M}}(\theta)}{d\theta} = 2\,(g(h(\theta)) - \theta), \qquad (9)$$
which is exactly the gradient of (7) assuming that the projection of θ onto the manifold is fixed. A
further advantage of this approximation, besides a computational speedup, is that no non-convexities
are introduced due to the regularization.
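A sketch of the two kernel-regression maps (5)-(6) and of the projection distance (7) with its fixed-projection gradient; the Gaussian kernels, bandwidths, and names are our assumptions, with Theta and Z standing for the learned {θ_j} and {z_j}.

```python
import numpy as np

def gauss(sq_dist, bw):
    return np.exp(-sq_dist / (2.0 * bw ** 2))

def h_map(theta, Theta, Z, bw=1.0):
    """h: ambient parameter -> low-dimensional coordinates (kernel regression, eq. (5))."""
    w = gauss(((theta - Theta) ** 2).sum(axis=1), bw)   # Theta: (T, d)
    return (w[:, None] * Z).sum(axis=0) / w.sum()       # Z: (T, k)

def g_map(r, Theta, Z, bw=0.5):
    """g: low-dimensional coordinates -> point on the manifold (eq. (6))."""
    w = gauss(((r - Z) ** 2).sum(axis=1), bw)
    return (w[:, None] * Theta).sum(axis=0) / w.sum()

def projection_and_grad(theta, Theta, Z):
    """P_M(theta) = ||theta - g(h(theta))||^2 and its gradient when the
    projection g(h(theta)) is held fixed, as in the convex approximation."""
    theta_m = g_map(h_map(theta, Theta, Z), Theta, Z)
    diff = theta - theta_m
    return float(diff @ diff), 2.0 * diff
```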
Algorithm 1 MTL using Manifold Regularization
Input: {x_i, y_i}_{i=1}^{n} for t = 1, ..., T.
Output: θ_1, ..., θ_T.
Initialize: Learn θ_1, ..., θ_T independently.
Learn the task-manifold using θ_1, ..., θ_T.
while it < numIter do
  for t = 1 to T do
    Learn θ_t using (4) with (7) or (10).
  end for
  Relearn the task-manifold using θ_1, ..., θ_T.
end while
The proposed manifold regularization approximation allows one to use any STL method without much
change in the optimization of the STL problem. The proposed method for MTL pipelines manifold
learning with the STL. Using (7), one can write (4) as:
$$C_P = \sum_{t=1}^{T} \Big( \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|\theta_t\|^2 + \gamma \big\|\theta_t - \tilde{\theta}_t^{\mathcal{M}}\big\|^2 \Big) \qquad (10)$$
here θ̃_t^M is the fixed projection of θ_t on the manifold. Note that in the proposed approximation
of the above expression, θ̃_t^M is fixed while computing the gradient, i.e., one does not have to worry
about moving the projection of the point on the manifold during the gradient step. Although in
the following example we will solve (10) for a linear kernel, the extension to non-linear kernels is
straightforward under the proposed approximation. This approximation allows one to treat the manifold regularizer similarly to the RKHS regularizer ‖θ_t‖² and solve the generalized learning problem
(4) with non-linear kernels. Note that ‖θ_t − θ̃_t^M‖² is a monotonic function of θ, so it does not violate
the representer theorem.
3.2 Example: Linear Regression
In this section, we solve the optimization problem (4) for the linear regression model. This is the
model we have used in all of our experiments. In the learning framework (4), the loss function is
L(x, y, w_t) = (y − ⟨w_t, x⟩)² with linear kernel k(x, y) = ⟨x, y⟩. We have changed the notation for
the parameters from θ to w to differentiate the linear regression from the general framework. The
cost function for linear regression can now be written as:
$$C_P = \sum_{t=1}^{T} \Big( \sum_{x \in X_t} (y - \langle w_t, x\rangle)^2 + \frac{\lambda}{2}\|w_t\|^2 + \gamma\, P_{\mathcal{M}}(w_t) \Big) \qquad (11)$$
This cost function may be convex or non-convex depending upon the manifold term P_M(w_t). The
first two terms are convex. If one uses the approximation (10), this problem becomes convex and
has a form similar to STL. The solution under this approximation is given by:
$$w_t = \big((\lambda + \gamma) I + X_t X_t^T\big)^{-1} \big(X_t Y_t^T + \gamma\, \tilde{w}_t^{\mathcal{M}}\big) \qquad (12)$$
where I is a d × d identity matrix, X_t is a d × n example matrix, and Y_t is a row vector of
corresponding labels. w̃_t^M is the orthogonal projection of w_t on the manifold.
3.3 Algorithm Description
The algorithm for MTL with manifold regularization is straightforward and shown in Algorithm 1.
The algorithm begins with the STL setting, i.e., each task parameter is learned independently. These
learned task parameters are then used to estimate the task-manifold. Keeping the manifold structure
fixed, we relearn all task parameters using manifold regularization. Equation (9) is used to compute
the gradient of the projection distance used in relearning the parameters. This step gives us the
explicit representation of the projection in the case of a linear kernel while a set of weights in the
case of a non-linear kernel. Current code available for computing the projection [11] only handles
points in the Euclidean space (RKHS with linear kernel), not in a general RKHS, though in theory,
it is possible to extend the current code to general RKHS. Once the parameters for all tasks are
learned, the manifold is re-estimated based on the updated task parameters. This process is repeated
for a fixed number of iterations (in our experiments we use 5 iterations).
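Putting the pieces together, Algorithm 1 reduces to the loop below, reusing solve_task from the previous sketch; learn_manifold and its project method are assumed interfaces standing in for the principal-surface learner of [11], not available library calls.

```python
import numpy as np

def mtl_manifold(tasks, lam, gam, learn_manifold, num_iter=5):
    """tasks: list of (X, y) with X of shape (d, n). Returns task parameters W,
    following Algorithm 1 with the convex approximation (10)/(12)."""
    d = tasks[0][0].shape[0]
    # STL initialization: gam = 0 disables the manifold term (plain ridge).
    W = [solve_task(X, y, np.zeros(d), lam, 0.0) for X, y in tasks]
    for _ in range(num_iter):
        manifold = learn_manifold(np.stack(W))     # refit g, h to current W
        for t, (X, y) in enumerate(tasks):
            w_tilde = manifold.project(W[t])       # g(h(w_t)), then held fixed
            W[t] = solve_task(X, y, w_tilde, lam, gam)
    return W
```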
4 Experiments
In this section, we consider the regression task and show the experimental results of our method. We
evaluate our method on both synthetic and real datasets.
4.1 Synthetic Dataset
First, we evaluate our method on synthetic data. This data is generated from task parameters
sampled from a known manifold (swiss roll). The data is generated by first sampling the points
from the 3-dimensional swiss roll, and then using these points as the task parameters to generate the
examples using the linear regression model. We sample 100 tasks, and for each task we generate
2 examples. The number of examples per task is kept low for two reasons. First, the task at hand
(this is linear) is a relatively easy task and more number of examples give a nearly perfect regression
model with the STL method itself, leaving almost no room for improvement. Second, MTL in the
real world makes sense only when the number of examples per task is low. In all of our experiments,
we compare our approach with the approach presented in [4] for two reasons. First, this is the
approach most closely related to our approach (this makes linear assumption while we make the
non-linear assumption), and second, code is available online2 .
In all our experiments we report the root mean square error (RMSE) [4]. For a set of 100 tasks, taskwise results on the synthetic data are shown in Figure 2(a). In this figure, the x-axis represents the RMSE of the STL model while the y-axis is the RMSE of the MTL model, so Figure 2(a) shows the performance of the MTL model relative to the STL model. Each point (x, y) in the figure represents an (STL, MTL) pair. Blue dots denote the MTL performance of our method while green crosses denote the performance of the baseline method [4]. The red line denotes the points where MTL and STL performed equally. Any point above the red line shows that the RMSE of MTL is higher (bad case) while points below denote that the RMSE of MTL is lower (good case). It is clear from Figure 2(a) that our method is able to use the manifold information and therefore outperforms both the STL and MTL-baseline methods. We improve the performance of almost all tasks with respect to STL, while MTL-baseline improves the performance of only a few tasks. Note the mean performance improvement (reduction in RMSE, i.e., RMSE of STL minus RMSE of MTL) over all tasks for our method and for the baseline MTL: we obtain an improvement of +0.0131 while the baseline has a negative performance improvement of -0.0204. For statistical significance, reported numbers are averaged over 10 runs. Hyperparameters of both models (the baseline's, and ours, $\lambda$ and $\gamma$) were tuned on a small dataset chosen randomly.
4.2 Real Regression Dataset
We now evaluate our method on two real datasets, the school dataset and the computer survey dataset [14], the same datasets as used in the baseline model [4]. Moreover, they have also been used in previous MTL studies, for example, the school dataset in [5, 10] and the computer dataset in [14].
Computer This dataset is a survey of 190 students who rated the likelihood of purchasing one of 20 different personal computers. Here students correspond to the tasks and computers correspond to the examples. Each student rated all of the 20 computers on a scale of 0-10, therefore giving 20 labeled examples per task. Each computer (input example) is represented by 13 different computer characteristics (RAM, cache, CPU, price, etc.). Training and test sets were obtained by splitting the dataset into 75% and 25%, thus giving 15 examples for training and 5 examples for testing.
School This dataset³ is from the Inner London Education Authority and consists of the examination scores of 15362 students from 139 schools in London. Here, each school corresponds to a task, thus a total of 139 tasks. The input consists of the year of the examination, 4 school-specific and 3 student-specific attributes. Following [5, 4], each categorical feature is replaced with binary
²For a fair comparison, we use the code provided by the author, available at http://ttic.uchicago.edu/~argyriou/code/mtl_feat/mtl_feat.tar.
³Available at http://www.cmm.bristol.ac.uk/learning-training/multilevel-m-support/datasets.shtml
[Figure 2(a): scatter plot of RMSE (STL) vs. RMSE (MTL); panel title: n=2, T=100, AvgManifold=0.0131, AvgBaseline=-0.0204. Figure 2(b): Avg RMSE vs. number of examples per task, with curves for STL, MTL-Manifold, and MTL-Baseline.]
Figure 2: Taskwise performance on the synthetic dataset. The red line marks where STL and MTL
perform equally. Any points above it represent the tasks whose RMSE increased through the MTL
framework while those below showed performance improvement (reduced RMSE). Green crosses
are the baseline method and blue dots are the manifold method. Avg{Manifold,Baseline} in the title is
the mean performance improvement of all tasks over STL. (b) Average RMSE vs. number of examples for the school dataset.
[Figure 3(a): computer dataset, panel title: n=15, T=190, AvgManifold=0.2302, AvgBaseline=-0.9121. Figure 3(b): school dataset, panel title: n=10, T=139, AvgManifold=0.1458, AvgBaseline=0.1563. Axes: RMSE (STL) vs. RMSE (MTL).]
Figure 3: Taskwise performance on (a) computer and (b) school datasets.
features, giving us a total of 26 features. We again split the dataset into 75% training and 25%
testing.
Similar to the synthetic dataset, the hyperparameters of the baseline method and of the manifold method ($\lambda$ and $\gamma$) were tuned on a small validation dataset picked randomly from the training set. In the experiments, whenever we are required to use fewer examples, the examples were chosen randomly. In such experiments, reported numbers were averaged over 10 runs for statistical significance. Note that the fewer the examples, the higher the variance due to randomness. In order to see whether learning tasks simultaneously helps, we did not consider the zero value while tuning the hyperparameters of MTL, to avoid reducing the MTL method to STL.
Figures 3(a) and 3(b) show the taskwise performance on the computer and school datasets, respectively. We note that for the computer dataset, we perform significantly better than both the STL and baseline methods. The baseline method performs worse than the STL method, giving a negative average performance improvement of -0.9121. We believe that this is because the tasks are related non-linearly. For the school dataset, we perform better than both STL and the baseline method, though the relative performance improvement is not as significant as on the computer dataset. On the school dataset, the baseline method has mixed behavior relative to the STL method, performing well on some tasks while performing worse on others. On both of these datasets, we observe that our method does not cause negative transfer, i.e., it does not cause a task to perform worse than STL. Although we have not used anything in our problem formulation to avoid negative transfer, this observation is interesting. Note that almost all existing MTL methods suffer from the negative transfer phenomenon. We emphasize that the baseline method has two parameters
that are very important, the regularization parameter and P. In our experiments we found that the baseline method is very sensitive to both of these parameters. In order to have a fair and competitive comparison, we used the best values of these parameters, tuned on a small validation dataset picked randomly from the training set.
Figure 4: Avg RMSE vs. number of tasks for (a) the computer dataset and (b) the school dataset, with curves for STL, MTL-Manifold, and MTL-Baseline.
Now we show how performance varies with the number of training examples. Figure 2(b) shows the relative performance of STL, MTL-Baseline, and MTL-Manifold on the school dataset. We outperform the STL method significantly while performing comparably to the baseline. Note that when the number of examples is relatively low, the baseline method outperforms our method because we do not have enough examples to estimate the task parameters used for the manifold construction. But as we increase the number of examples, we get better estimates of the parameters, and hence better manifold regularization. For n > 100 we outperform the baseline method by a small amount. Variation of performance with n is not shown for the computer dataset because it has only 20 examples per task.
Performance variation with respect to the number of tasks for the school and computer datasets is shown in Figure 4. We outperform the STL method and the baseline method on the computer dataset, while performing better than or equal to them on the school dataset. These two plots indicate how the tasks are related in the two datasets. They suggest that tasks in the school dataset are related linearly (the manifold and baseline methods have the same performance⁴) while tasks in the computer dataset are related non-linearly, which is why the baseline method performs poorly compared to the STL method. The two datasets exhibit different behavior as we increase the number of tasks, though behavior relative to the STL method remains constant. This suggests that after a certain number of tasks, performance is not affected by adding more tasks. This is especially true for the computer dataset, since it has only 13 features and only a few tasks are required to learn the task relatedness structure.

In summary, our method improves performance over STL on all of these datasets (no negative transfer), while the baseline method performs comparably on the school dataset and worse on the computer dataset.
5 Conclusion
We have presented a novel method for multitask learning based on a natural and intuitive assumption about task relatedness. We have used the manifold assumption to enforce task relatedness, which is a generalization of previous notions of relatedness. Unlike many previous approaches, our method does not require any external information (e.g., a function or matrix) beyond the manifold assumption. We have performed experiments on synthetic and real datasets, and compared our results with the state-of-the-art method. We have shown that we outperform the baseline method in nearly all cases. We emphasize that unlike the baseline method, we improve over single task learning in almost all cases and do not encounter negative transfer.
⁴In the ideal case, the non-linear method should be able to discover the linear structure. But in practice, they might differ, especially when there are fewer tasks. This is the reason we perform equally on the school dataset when the number of tasks is high.
References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS '06, 2006.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 2007.
[3] A. Argyriou, C. A. Micchelli, and M. Pontil. When is there a representer theorem? Vector versus matrix regularizers. J. Mach. Learn. Res., 10:2507-2529, 2009.
[4] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In NIPS '08, 2008.
[5] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. JMLR, 4, 2003.
[6] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373-1396, 2002.
[7] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. J. Mach. Learn. Res., 7:2399-2434, 2006.
[8] R. Caruana. Multitask learning. Machine Learning, pages 41-75, 1997.
[9] H. Daumé III. Bayesian multitask learning with latent hierarchies. In Conference on Uncertainty in Artificial Intelligence '09, Montreal, Canada, 2009.
[10] T. Evgeniou, C. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. JMLR, 6:615-637, 2005.
[11] S. Gerber, T. Tasdizen, and R. Whitaker. Dimensionality reduction and principal surfaces via kernel map manifolds. In Proceedings of the 2009 International Conference on Computer Vision (ICCV), 2009.
[12] T. Hastie. Principal curves and surfaces. PhD thesis, Stanford University, 1984.
[13] L. Jacob, F. Bach, and J.-P. Vert. Clustered multi-task learning: A convex formulation. In NIPS '08, 2008.
[14] P. J. Lenk, W. S. DeSarbo, P. E. Green, and M. R. Young. Hierarchical Bayes conjoint analysis: Recovery of partworth heterogeneity from reduced experimental designs. Marketing Science, 1996.
[15] Q. Liu, X. Liao, H. L. Carin, J. R. Stack, and L. Carin. Semisupervised multitask learning. IEEE, 2009.
[16] C. A. Micchelli and M. Pontil. Regularized multi-task learning. In KDD 2004, pages 109-117, 2004.
[17] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, December 2000.
[18] J. B. Tenenbaum, V. Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, December 2000.
[19] S. Thrun and L. Pratt, editors. Learning to Learn. Kluwer Academic Publishers, Norwell, MA, USA, 1998.
[20] K. Q. Weinberger, F. Sha, and L. K. Saul. Learning a kernel matrix for nonlinear dimensionality reduction. In ICML 2004, pages 839-846. ACM Press, 2004.
[21] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task learning for classification with Dirichlet process priors. J. Mach. Learn. Res., 8:35-63, 2007.
[22] K. Yu, V. Tresp, and A. Schwaighofer. Learning Gaussian processes from multiple tasks. In ICML '05, 2005.
[23] J. Zhang, Z. Ghahramani, and Y. Yang. Flexible latent variable models for multi-task learning. Mach. Learn., 73(3):221-242, 2008.
[24] J. Zhang, J. Zhang, Y. Yang, Z. Ghahramani, and Y. Yang. Learning multiple related tasks using latent independent component analysis. In NIPS '05, 2005.
Distributed Dual Averaging in Networks
John C. Duchi¹, Alekh Agarwal¹, Martin J. Wainwright¹,²
Department of Electrical Engineering and Computer Science¹ and Department of Statistics²
University of California, Berkeley
Berkeley, CA 94720-1776
{jduchi,alekh,wainwrig}@eecs.berkeley.edu
Abstract
The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using
only local computation and communication. We develop and analyze distributed
algorithms based on dual averaging of subgradients, and provide sharp bounds on
their convergence rates as a function of the network size and topology. Our analysis clearly separates the convergence of the optimization algorithm itself from
the effects of communication constraints arising from the network structure. We
show that the number of iterations required by our algorithm scales inversely in
the spectral gap of the network. The sharpness of this prediction is confirmed both
by theoretical lower bounds and simulations for various networks.
1 Introduction
Network-structured optimization problems arise in a variety of application domains within the information sciences and engineering. A canonical example that arises in machine learning is the
problem of minimizing a loss function averaged over a large dataset (e.g. [16, 17]). With terabytes
of data, it is desirable (even necessary) to assign smaller subsets of the data to different processors, and the processors must communicate to find parameters that minimize the loss over the entire
dataset. Problems such as multi-agent coordination, estimation problems in sensor networks, and
packet routing also are all naturally cast as distributed convex minimization [1, 13, 24]. The seminal
work of Tsitsiklis and colleagues [22, 1] analyzed algorithms for minimization of a smooth function f known to several agents while distributing processing of components of the parameter vector
$x \in \mathbb{R}^n$. More recently, a few researchers have shifted focus to problems in which each processor
locally has its own convex (potentially non-differentiable) objective function [18, 15, 21, 11].
In this paper, we provide a simple new subgradient algorithm for distributed constrained optimization of a convex function. We refer to it as a dual averaging subgradient method, since it is based on
maintaining and forming weighted averages of subgradients throughout the network. This approach
is essentially different from previously developed distributed subgradient methods [18, 15, 21, 11],
and these differences facilitate our analysis of network scaling issues?how convergence rates depend on network size and topology. Indeed, the second main contribution of this paper is a careful
analysis that demonstrates a close link between convergence of the algorithm and the underlying
spectral properties of the network. The convergence rates for a different algorithm given by the
papers [18, 15] grow exponentially in the number of nodes n in the network. Ram et al. [21] provide tighter analysis that yields convergence rates that scale cubically in the network size, but are
independent of the network topology. Consequently, their analysis does not capture the intuition
that distributed algorithms should converge faster on ?well-connected? networks?expander graphs
being a prime example?than on poorly connected networks (e.g., chains or cycles). Johansson et
al. [11] analyze a low communication peer-to-peer protocol that attains rates dependent on network
structure. However, in their algorithm only one node has a current parameter value, while all nodes
in our algorithm maintain good estimates of the optimum at all times. This is important in online
or streaming problems where nodes are expected to act or answer queries in real-time. In additional comparison to previous work, our analysis yields network scaling terms that are often substantially sharper. Our development yields an algorithm with convergence rate that scales inversely in the spectral gap of the network. By exploiting known results on spectral gaps for graphs with $n$ nodes, we show that our algorithm obtains an $\epsilon$-optimal solution in $O(n^2/\epsilon^2)$ iterations for a single cycle or path, $O(n/\epsilon^2)$ iterations for a two-dimensional grid, and $O(1/\epsilon^2)$ iterations for a bounded-degree expander graph. Simulation results show excellent agreement with these theoretical predictions.
2 Problem set-up and algorithm
In this section, we provide a formal statement of the distributed minimization problem and a description of the distributed dual averaging algorithm.
Distributed minimization: We consider an optimization problem based on functions that are distributed over a network. More specifically, let $G = (V, E)$ be an undirected graph over the vertex set $V = \{1, 2, \ldots, n\}$ with edge set $E \subset V \times V$. Associated with each $i \in V$ is a convex function $f_i : \mathbb{R}^d \rightarrow \mathbb{R}$, and our overarching goal is to solve the constrained optimization problem
$$\min_{x \in \mathcal{X}} \frac{1}{n} \sum_{i=1}^{n} f_i(x),$$
where $\mathcal{X}$ is a closed convex set. Each function $f_i$ is convex and hence subdifferentiable, but need not be smooth. We assume without loss of generality that $0 \in \mathcal{X}$, since we can simply translate $\mathcal{X}$. Each node $i \in V$ is associated with a separate agent, and each agent $i$ maintains its own parameter vector $x_i \in \mathbb{R}^d$. The graph $G$ imposes communication constraints on the agents: in particular, agent $i$ has local access to only the objective function $f_i$ and can communicate directly only with its immediate neighbors $j \in N(i) := \{j \in V \mid (i, j) \in E\}$.
A concrete motivating example for these types of problems is the machine learning scenario described in Section 1. In this case, the set X is the parameter space of the learner. Each function fi is
the empirical loss over the subset of data assigned to processor i, and the average f is the empirical
loss over the entire dataset. We use cluster computing as our model, so each processor is a node in
the cluster and the graph G contains edges between processors connected with small latencies; this
setup avoids communication bottlenecks of architectures with a centralized master node.
Dual averaging: Our algorithm is based on a dual averaging algorithm [20] for minimization of a (potentially nonsmooth) convex function $f$ subject to the constraint that $x \in \mathcal{X}$. We begin by describing the standard version of the algorithm. The dual averaging scheme is based on a proximal function $\psi : \mathbb{R}^d \rightarrow \mathbb{R}$ assumed to be strongly convex with respect to a norm $\|\cdot\|$; more precisely, $\psi(y) \geq \psi(x) + \langle \nabla\psi(x), y - x \rangle + \frac{1}{2}\|x - y\|^2$ for all $x, y \in \mathcal{X}$. We assume w.l.o.g. that $\psi \geq 0$ on $\mathcal{X}$ and that $\psi(0) = 0$. Such proximal functions include the canonical quadratic $\psi(x) = \frac{1}{2}\|x\|_2^2$, which is strongly convex with respect to the $\ell_2$-norm, and the negative entropy $\psi(x) = \sum_{j=1}^{d} x_j \log x_j - x_j$, which is strongly convex with respect to the $\ell_1$-norm for $x$ in the probability simplex.
We assume that each function $f_i$ is $L$-Lipschitz with respect to the same norm $\|\cdot\|$; that is,
$$|f_i(x) - f_i(y)| \leq L \|x - y\| \quad \text{for } x, y \in \mathcal{X}. \qquad (1)$$
Many cost functions $f_i$ satisfy this type of Lipschitz condition, for instance, convex functions on a compact domain $\mathcal{X}$ or any polyhedral function on an arbitrary domain [8]. The Lipschitz condition (1) implies that for any $x \in \mathcal{X}$ and any subgradient $g_i \in \partial f_i(x)$, we have $\|g_i\|_* \leq L$, where $\|\cdot\|_*$ denotes the dual norm to $\|\cdot\|$, defined by $\|v\|_* := \sup_{\|u\|=1} \langle v, u \rangle$.
The dual averaging algorithm generates a sequence of iterates $\{x(t), z(t)\}_{t=0}^{\infty}$ contained within $\mathcal{X} \times \mathbb{R}^d$. At time step $t$, the algorithm receives a subgradient $g(t) \in \partial f(x(t))$, and updates
$$z(t+1) = z(t) - g(t) \quad \text{and} \quad x(t+1) = \Pi_{\mathcal{X}}^{\psi}(-z(t+1), \alpha(t)). \qquad (2)$$
Here $\{\alpha(t)\}_{t=0}^{\infty}$ is a non-increasing sequence of positive stepsizes and
$$\Pi_{\mathcal{X}}^{\psi}(z, \alpha) := \operatorname*{argmin}_{x \in \mathcal{X}} \Big\{ \langle z, x \rangle + \frac{1}{\alpha} \psi(x) \Big\} \qquad (3)$$
is a type of projection. Intuitively, given the current iterate $(x(t), z(t))$, the next iterate $x(t+1)$ is chosen to minimize an averaged first-order approximation to the function $f$, while the proximal function $\psi$ and stepsize $\alpha(t) > 0$ enforce that the iterates $\{x(t)\}_{t=0}^{\infty}$ do not oscillate wildly. The algorithm is similar to the follow-the-perturbed/regularized-leader algorithms developed in the context of online learning [12], though in this form the algorithm seems to be originally due to Nesterov [20].
In Section 4, we relate the above procedure to the distributed algorithm we now describe.
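To make (2)-(3) concrete: for the quadratic proximal function $\psi(x) = \frac{1}{2}\|x\|_2^2$ over a Euclidean ball, the projection (3) has a closed form (scale and clip to the ball), and the whole method is a few lines. The following is a minimal sketch under those assumptions, with a user-supplied subgradient oracle; it is illustrative, not the authors' implementation.

import numpy as np

def dual_averaging(subgrad, x0, alpha, radius, T):
    # Centralized dual averaging (2)-(3) with psi(x) = 0.5 * ||x||_2^2
    # over X = {x : ||x||_2 <= radius}. alpha maps t -> stepsize alpha(t).
    x, z = x0.copy(), np.zeros_like(x0)
    for t in range(T):
        z = z - subgrad(x)                 # z(t+1) = z(t) - g(t)
        v = alpha(t) * z                   # unconstrained minimizer of (3) at -z(t+1)
        nrm = np.linalg.norm(v)
        x = v if nrm <= radius else (radius / nrm) * v   # clip to the ball
    return x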
Distributed dual averaging: Here we consider a novel extension of dual averaging to the distributed setting. For all times $t$, each node $i \in V$ maintains a pair of vectors $(x_i(t), z_i(t)) \in \mathcal{X} \times \mathbb{R}^d$. At iteration $t$, node $i$ computes a subgradient $g_i(t) \in \partial f_i(x_i(t))$ of the local function $f_i$ and receives $\{z_j(t), j \in N(i)\}$ from its neighbors. Its update of the current estimate $x_i(t)$ is based on a weighted average of these parameters. To model the process, let $P \in \mathbb{R}^{n \times n}$ be a doubly stochastic symmetric matrix with $P_{ij} > 0$ only if $(i, j) \in E$ when $i \neq j$. Thus $\sum_{j=1}^{n} P_{ij} = \sum_{j \in N(i)} P_{ij} = 1$ for all $i \in V$ and $\sum_{i=1}^{n} P_{ij} = \sum_{i \in N(j)} P_{ij} = 1$ for all $j \in V$. Given a non-increasing sequence $\{\alpha(t)\}_{t=0}^{\infty}$ of positive stepsizes, each node $i \in V$ updates
$$z_i(t+1) = \sum_{j \in N(i)} P_{ji}\, z_j(t) - g_i(t), \quad \text{and} \quad x_i(t+1) = \Pi_{\mathcal{X}}^{\psi}(-z_i(t+1), \alpha(t)), \qquad (4)$$
where the projection $\Pi_{\mathcal{X}}^{\psi}$ was defined in (3). In words, node $i$ computes the new dual parameter $z_i(t+1)$ from a weighted average of its own subgradient $g_i(t)$ and the parameters $\{z_j(t), j \in N(i)\}$ in its neighborhood; it then computes the local iterate $x_i(t+1)$ by a proximal projection. We show convergence of the local sequence $\{x_i(t)\}_{t=1}^{\infty}$ to an optimum of the global objective via the local average $\hat{x}_i(T) = \frac{1}{T} \sum_{t=1}^{T} x_i(t)$, which can evidently be computed in a decentralized manner.
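One synchronous round of (4) can be vectorized across nodes. In the sketch below, Z stacks the dual vectors z_i(t) as rows, P is the doubly stochastic mixing matrix, and the projection again assumes the quadratic proximal function over a Euclidean ball; variable names and the oracle interface are our own assumptions for illustration.

import numpy as np

def distributed_round(Z, P, subgrads, alpha_t, radius):
    # Z: (n, d) rows z_i(t); subgrads maps local iterates X (n, d)
    # to an (n, d) array of local subgradients g_i(t).
    X = alpha_t * Z                                     # x_i = projection of -z_i
    nrms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.minimum(1.0, radius / np.maximum(nrms, 1e-12))
    X = X * scale                                       # clip each row to the ball
    Z_next = P @ Z - subgrads(X)                        # consensus minus local gradient
    return Z_next, X

Averaging the rows of X over rounds gives each node its local time-average.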
3 Main results and consequences
We will now state the main results of this paper and illustrate some of their consequences. We give
the proofs and a deeper investigation of related corollaries at length in the sections that follow.
Convergence of distributed dual averaging: We start with a result on the convergence of the distributed dual averaging algorithm that provides a decomposition of the error into an optimization term and the cost associated with network communication. In order to state this theorem, we define the averaged dual variable $\bar{z}(t) := \frac{1}{n} \sum_{i=1}^{n} z_i(t)$, and we recall the local time-average $\hat{x}_i(T)$.

Theorem 1 (Basic convergence result). Given the sequences $\{x_i(t)\}_{t=0}^{\infty}$ and $\{z_i(t)\}_{t=0}^{\infty}$ generated by the updates (4) with step size sequence $\{\alpha(t)\}_{t=0}^{\infty}$, for each node $i \in V$ and any $x^* \in \mathcal{X}$, we have
$$f(\hat{x}_i(T)) - f(x^*) \leq \frac{1}{T\alpha(T)} \psi(x^*) + \frac{L^2}{2T} \sum_{t=1}^{T} \alpha(t-1) + \frac{3L}{T} \max_{j=1,\ldots,n} \sum_{t=1}^{T} \alpha(t)\, \|\bar{z}(t) - z_j(t)\|_*.$$
Theorem 1 guarantees that after $T$ steps of the algorithm, every node $i \in V$ has access to a locally defined quantity $\hat{x}_i(T)$ such that the difference $f(\hat{x}_i(T)) - f(x^*)$ is upper bounded by a sum of three terms. The first two terms in the upper bound are optimization error terms that are common to subgradient algorithms. The third term is the penalty incurred due to having different estimates at different nodes in the network, and it measures the deviation of each node's estimate of the average gradient from the true average gradient. Thus, roughly, Theorem 1 ensures that as long as the bound on the deviation $\|\bar{z}(t) - z_i(t)\|_*$ is tight enough, for appropriately chosen $\alpha(t)$ (say $\alpha(t) \propto 1/\sqrt{t}$), the error of $\hat{x}_i(T)$ is small uniformly across all nodes $i \in V$.
Convergence rates and network topology: We now turn to investigation of the effects of network topology on convergence rates. In this section,¹ we assume that the network topology is static and that communication occurs via a fixed doubly stochastic weight matrix $P$ at every round. Since $P$ is symmetric and stochastic, it has largest singular value $\sigma_1(P) = 1$. As the following result shows, the convergence of our algorithm is controlled by the spectral gap $\gamma(P) := 1 - \sigma_2(P)$ of $P$.

Theorem 2 (Rates based on spectral gap). Under the conditions and notation of Theorem 1, suppose moreover that $\psi(x^*) \leq R^2$. With step size choice $\alpha(t) = \frac{R\sqrt{1 - \sigma_2(P)}}{4L\sqrt{t}}$, we have
$$f(\hat{x}_i(T)) - f(x^*) \leq 8\, \frac{RL \log(T\sqrt{n})}{\sqrt{T}} \cdot \frac{1}{\sqrt{1 - \sigma_2(P)}} \quad \text{for all } i \in V.$$

¹We can weaken these conditions; see the long version of this paper for extensions to random $P$ [4].
Figure 1. (a) A 3-connected cycle. (b) 1-connected two-dimensional grid with non-toroidal boundary
conditions. (c) A random geometric graph. (d) A random 3-regular expander graph.
This theorem establishes a tight connection between the convergence rate of distributed subgradient methods and the spectral properties of the underlying network. The inverse dependence on the spectral gap $1 - \sigma_2(P)$ is quite natural, since it is well known to determine the rates of mixing in random walks on graphs [14], and the propagation of information in our algorithm is integrally tied to the random walk on the underlying graph with transition probabilities specified by $P$. Johansson et al. [11] establish rates for their Markov incremental gradient method (MIGD) of $\sqrt{n\Gamma_{ii}}/\sqrt{T}$, where $\Gamma = (I - P + \mathbb{1}\mathbb{1}^\top/n)^{-1}$; performing an eigen-decomposition of the $\Gamma$ matrix shows that $\sqrt{n\Gamma_{ii}}$ is always lower bounded by $1/\sqrt{1 - \sigma_2(P)}$, our bound in Theorem 2.
Using Theorem 2, one can derive explicit convergence rates for several classes of interesting networks, and Figure 1 illustrates four graph topologies of interest. As a first example, the $k$-connected cycle in panel (a) is formed by placing $n$ nodes on a circle and connecting each node to its $k$ neighbors on the right and left. The grid (panel (b)) is obtained by connecting nodes to their $k$ nearest neighbors in axis-aligned directions. In panel (c), we show a random geometric graph, constructed by placing nodes uniformly at random in $[0, 1]^2$ and connecting any two nodes separated by a distance less than some radius $r > 0$. These graphs are often used to model the connectivity patterns of distributed devices such as wireless sensor motes [7]. Finally, panel (d) shows an instance of a bounded degree expander, which belongs to a special class of sparse graphs that have very good mixing properties [3]. For many random graph models, a typical sample is an expander with high probability (e.g., random degree regular graphs [5]). In addition, there are several deterministic constructions of expanders that are degree regular (see Section 6.3 of Chung [3] for further details).
In order to state explicit convergence rates, we need to specify a particular choice of the matrix $P$ that respects the graph structure. Let $A \in \mathbb{R}^{n \times n}$ be the symmetric adjacency matrix of the undirected graph $G$, satisfying $A_{ij} = 1$ when $(i, j) \in E$ and $A_{ij} = 0$ otherwise. For each node $i \in V$, let $\delta_i = |N(i)| = \sum_{j=1}^{n} A_{ij}$ denote the degree of node $i$ and define the diagonal matrix $D = \operatorname{diag}\{\delta_1, \ldots, \delta_n\}$. Letting $\delta_{\max} = \max_{i \in V} \delta_i$ denote the maximum degree, we define
$$P_n(G) := I - \frac{1}{\delta_{\max} + 1}\big(D - A\big), \qquad (5)$$
which is symmetric and doubly stochastic by construction. The following result summarizes our conclusions for the choice (5) of stochastic matrix for different network topologies. We state the results in terms of optimization error achieved after $T$ iterations and the number of iterations $T_G(\epsilon; n)$ required to achieve error $\epsilon$ for network type $G$ with $n$ nodes. (These are equivalent statements.)
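Constructing (5) and reading off its spectral gap is one line of linear algebra each; the sketch below (dense SVD for clarity, not efficiency) is one possible helper.

import numpy as np

def graph_matrix(A):
    # Build P_n(G) = I - (D - A) / (delta_max + 1) from a 0/1 adjacency matrix A,
    # and return it together with its spectral gap 1 - sigma_2(P).
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                         # graph Laplacian D - A
    P = np.eye(A.shape[0]) - L / (deg.max() + 1.0)
    sigma = np.linalg.svd(P, compute_uv=False)   # singular values, descending
    return P, 1.0 - sigma[1]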
Corollary 1. Under the conditions of Theorem 2, using $P = P_n(G)$ gives the following rates.

(a) $k$-connected paths and cycles: $f(\hat{x}_i(T)) - f(x^*) = \mathcal{O}\big(\frac{RL}{\sqrt{T}} \cdot \frac{n \log(T\sqrt{n})}{k}\big)$, $T(\epsilon; n) = \tilde{\mathcal{O}}(n^2/\epsilon^2)$.

(b) $k$-connected $\sqrt{n} \times \sqrt{n}$ grids: $f(\hat{x}_i(T)) - f(x^*) = \mathcal{O}\big(\frac{RL}{\sqrt{T}} \cdot \frac{\sqrt{n} \log(T\sqrt{n})}{k}\big)$, $T(\epsilon; n) = \tilde{\mathcal{O}}(n/\epsilon^2)$.

(c) Random geometric graphs with connectivity radius $r = \Omega(\sqrt{\log^{1+\epsilon} n / n})$ for any $\epsilon > 0$: $f(\hat{x}_i(T)) - f(x^*) = \mathcal{O}\big(\frac{RL}{\sqrt{T}} \sqrt{\frac{n}{\log n}}\, \log(T\sqrt{n})\big)$ with high probability, $T(\epsilon; n) = \tilde{\mathcal{O}}(n/\epsilon^2)$.

(d) Expanders with bounded ratio of minimum to maximum node degree: $f(\hat{x}_i(T)) - f(x^*) = \mathcal{O}\big(\frac{RL}{\sqrt{T}} \log(T\sqrt{n})\big)$, $T(\epsilon; n) = \tilde{\mathcal{O}}(1/\epsilon^2)$.
By comparison, the results in the paper [11] give similar bounds for grids and cycles, but for $d$-dimensional grids we have $T(\epsilon; n) = \tilde{\mathcal{O}}(n^{2/d}/\epsilon^2)$ while MIGD achieves $T(\epsilon; n) = \tilde{\mathcal{O}}(n/\epsilon^2)$; for expanders and the complete graph MIGD achieves $T(\epsilon; n) = \tilde{\mathcal{O}}(n/\epsilon^2)$. We provide the proof of
Corollary 1 in Appendix A. Up to logarithmic factors, the optimization term in the convergence rate is always of the order $RL/\sqrt{T}$, while the remaining terms vary depending on the network topology. In general, Theorem 2 implies that at most
$$T_G(\epsilon; n) = \mathcal{O}\Big(\frac{1}{\epsilon^2} \cdot \frac{1}{1 - \sigma_2(P_n(G))}\Big)$$
iterations are required to achieve an $\epsilon$-accurate solution when using the matrix $P_n(G)$ defined in (5). It is interesting to ask whether this upper bound is actually tight. On one hand, it is known that even for centralized optimization algorithms, any subgradient method requires at least $\Omega(1/\epsilon^2)$ iterations to achieve $\epsilon$-accuracy [19], so that the $1/\epsilon^2$ term is unavoidable. The next proposition addresses the complementary issue, namely whether the inverse spectral gap term is unavoidable for the dual averaging algorithm. For the quadratic proximal function $\psi(x) = \frac{1}{2}\|x\|_2^2$, the following result establishes a lower bound on the number of iterations in terms of graph topology and network structure:

Proposition 1. Consider the dual averaging algorithm (4) with quadratic proximal function and communication matrix $P_n(G)$. For any graph $G$ with $n$ nodes, the number of iterations $T_G(c; n)$ required to achieve a fixed accuracy $c > 0$ is lower bounded as $T_G(c; n) = \Omega\big(\frac{1}{1 - \sigma_2(P_n(G))}\big)$.
The proof of this result, given in Appendix B, involves constructing a "hard" optimization problem and lower bounding the number of iterations required for our algorithm to solve it. In conjunction with Corollary 1, Proposition 1 implies that our predicted network scaling is sharp. Indeed, in Section 5, we show that the theoretical scalings from Corollary 1 (namely, quadratic, linear, and constant in network size $n$) are well matched in simulations of our algorithm.
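These predicted scalings can be sanity-checked numerically before running any optimization: the gap of P_n(G) should shrink like 1/n² on a cycle, like 1/n on a square grid, and stay bounded below on a random 3-regular expander. A small sketch reusing the graph_matrix helper above (the use of networkx for graph construction is an assumption about available tooling):

import numpy as np
import networkx as nx

for n in (64, 256, 1024):                      # perfect squares, even for 3-regular graphs
    k = int(round(np.sqrt(n)))
    graphs = {
        "cycle": nx.cycle_graph(n),
        "grid": nx.grid_2d_graph(k, k),
        "expander": nx.random_regular_graph(3, n, seed=0),
    }
    gaps = {name: graph_matrix(nx.to_numpy_array(g))[1] for name, g in graphs.items()}
    print(n, {name: "%.2e" % gap for name, gap in gaps.items()})
# Expected trends: cycle ~ 1/n^2, grid ~ 1/n, expander ~ constant.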
4 Proof sketches
Setting up the analysis: Using techniques similar to some past work [18], we establish convergence via the two sequences $\bar{z}(t) := \frac{1}{n} \sum_{i=1}^{n} z_i(t)$ and $y(t) := \Pi_{\mathcal{X}}^{\psi}(-\bar{z}(t), \alpha)$. The average sum of gradients $\bar{z}(t)$ evolves in a very simple way: in particular, we have
$$\bar{z}(t+1) = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{n} P_{ji}\big(z_j(t) - \bar{z}(t)\big) + \bar{z}(t) - \frac{1}{n} \sum_{j=1}^{n} g_j(t) = \bar{z}(t) - \frac{1}{n} \sum_{j=1}^{n} g_j(t), \qquad (6)$$
where the second equality follows from the double-stochasticity of $P$. The simple evolution (6) of the averaged dual sequence allows us to avoid difficulties with the non-linearity of projection that have been challenging in earlier work. Before proceeding with the proof of Theorem 1, we state a few useful results regarding the convergence of the standard dual averaging algorithm [20].
Lemma 2 (Nesterov). Let $\{g(t)\}_{t=1}^{\infty} \subset \mathbb{R}^d$ be an arbitrary sequence and $\{x(t)\}_{t=1}^{\infty}$ be defined by the updates (2). For a non-increasing sequence $\{\alpha(t)\}_{t=0}^{\infty}$ of positive stepsizes and any $x^* \in \mathcal{X}$,
$$\sum_{t=1}^{T} \langle g(t), x(t) - x^* \rangle \leq \frac{1}{2} \sum_{t=1}^{T} \alpha(t-1) \|g(t)\|_*^2 + \frac{1}{\alpha(T)} \psi(x^*).$$
Our second lemma allows us to restrict our analysis to the sequence $\{y(t)\}_{t=0}^{\infty}$ defined previously.

Lemma 3. Consider sequences $\{x_i(t)\}_{t=1}^{\infty}$, $\{z_i(t)\}_{t=0}^{\infty}$, and $\{y(t)\}_{t=0}^{\infty}$ that evolve according to (4). Then for each $i \in V$ and any $x^* \in \mathcal{X}$, we have
$$\sum_{t=1}^{T} \big[f(x_i(t)) - f(x^*)\big] \leq \sum_{t=1}^{T} \big[f(y(t)) - f(x^*)\big] + L \sum_{t=1}^{T} \alpha(t)\, \|\bar{z}(t) - z_i(t)\|_*.$$
Now we give the proof of the first theorem.

Proof of Theorem 1: Our proof is based on analyzing the sequence $\{y(t)\}_{t=0}^{\infty}$. For any $x^* \in \mathcal{X}$,
$$\sum_{t=1}^{T} f(y(t)) - f(x^*) = \sum_{t=1}^{T} \Big[\frac{1}{n} \sum_{i=1}^{n} f_i(x_i(t)) - f(x^*)\Big] + \sum_{t=1}^{T} \frac{1}{n} \sum_{i=1}^{n} \big[f_i(y(t)) - f_i(x_i(t))\big]$$
$$\leq \sum_{t=1}^{T} \Big[\frac{1}{n} \sum_{i=1}^{n} f_i(x_i(t)) - f(x^*)\Big] + \frac{L}{n} \sum_{t=1}^{T} \sum_{i=1}^{n} \|y(t) - x_i(t)\|, \qquad (7)$$
by the $L$-Lipschitz continuity of the $f_i$. Letting $g_i(t) \in \partial f_i(x_i(t))$ be a subgradient of $f_i$ at $x_i(t)$,
$$\frac{1}{n} \sum_{t=1}^{T} \sum_{i=1}^{n} \big[f_i(x_i(t)) - f_i(x^*)\big] \leq \frac{1}{n} \sum_{t=1}^{T} \Big[ \sum_{i=1}^{n} \langle g_i(t), y(t) - x^* \rangle + \sum_{i=1}^{n} \langle g_i(t), x_i(t) - y(t) \rangle \Big]. \qquad (8)$$
By definition of $\bar{z}(t)$ and $y(t)$, we have $y(t) = \operatorname{argmin}_{x \in \mathcal{X}} \big\{ \frac{1}{n} \sum_{s=1}^{t-1} \sum_{i=1}^{n} \langle g_i(s), x \rangle + \frac{1}{\alpha(t)} \psi(x) \big\}$. Thus, we see that the first term in the decomposition (8) can be written in the same way as the bound in Lemma 2, and as a consequence, we have the bound
$$\frac{1}{n} \sum_{t=1}^{T} \Big\langle \sum_{i=1}^{n} g_i(t),\, y(t) - x^* \Big\rangle \leq \frac{L^2}{2} \sum_{t=1}^{T} \alpha(t-1) + \frac{1}{\alpha(T)} \psi(x^*). \qquad (9)$$
It remains to control the final two terms in the bounds (7) and (8). Since $\|g_i(t)\|_* \leq L$ by assumption, we use the $\alpha$-Lipschitz continuity of the projection $\Pi_{\mathcal{X}}^{\psi}(\cdot, \alpha)$ [9, Theorem X.4.2.1] to see
$$\frac{L}{n} \sum_{t=1}^{T} \sum_{i=1}^{n} \|y(t) - x_i(t)\| + \frac{1}{n} \sum_{t=1}^{T} \sum_{i=1}^{n} \langle g_i(t), x_i(t) - y(t) \rangle \leq \frac{2L}{n} \sum_{t=1}^{T} \sum_{i=1}^{n} \|y(t) - x_i(t)\|$$
$$= \frac{2L}{n} \sum_{t=1}^{T} \sum_{i=1}^{n} \big\| \Pi_{\mathcal{X}}^{\psi}(-\bar{z}(t), \alpha(t)) - \Pi_{\mathcal{X}}^{\psi}(-z_i(t), \alpha(t)) \big\| \leq \frac{2L}{n} \sum_{t=1}^{T} \sum_{i=1}^{n} \alpha(t)\, \|\bar{z}(t) - z_i(t)\|_*.$$
Combining this bound with (7) and (9) yields the running sum bound
$$\sum_{t=1}^{T} \big[f(y(t)) - f(x^*)\big] \leq \frac{1}{\alpha(T)} \psi(x^*) + \frac{L^2}{2} \sum_{t=1}^{T} \alpha(t-1) + \frac{2L}{n} \sum_{t=1}^{T} \sum_{j=1}^{n} \alpha(t)\, \|\bar{z}(t) - z_j(t)\|_*. \qquad (10)$$
Applying Lemma 3 to (10) gives that $\sum_{t=1}^{T} \big[f(x_i(t)) - f(x^*)\big]$ is upper bounded by
$$\frac{1}{\alpha(T)} \psi(x^*) + \frac{L^2}{2} \sum_{t=1}^{T} \alpha(t-1) + \frac{2L}{n} \sum_{t=1}^{T} \sum_{j=1}^{n} \alpha(t)\, \|\bar{z}(t) - z_j(t)\|_* + L \sum_{t=1}^{T} \alpha(t)\, \|\bar{z}(t) - z_i(t)\|_*.$$
Dividing both sides by $T$ and using convexity of $f$ yields the bound in Theorem 1.
Proof of Theorem 2: For this proof sketch, we adopt the following notational conventions. For an $n \times n$ matrix $B$, we call its singular values $\sigma_1(B) \geq \sigma_2(B) \geq \cdots \geq \sigma_n(B) \geq 0$. For a real symmetric $B$, we use $\lambda_1(B) \geq \lambda_2(B) \geq \ldots \geq \lambda_n(B)$ to denote the $n$ real eigenvalues of $B$. We let $\Delta_n = \{x \in \mathbb{R}^n \mid x \succeq 0, \sum_{i=1}^{n} x_i = 1\}$ denote the $n$-dimensional probability simplex. We make frequent use of the following inequality [10]: for any positive integer $t = 1, 2, \ldots$ and any $x \in \Delta_n$,
$$\big\|P^t x - \mathbb{1}/n\big\|_{\mathrm{TV}} = \tfrac{1}{2} \big\|P^t x - \mathbb{1}/n\big\|_1 \leq \tfrac{1}{2} \sqrt{n}\, \big\|P^t x - \mathbb{1}/n\big\|_2 \leq \tfrac{1}{2}\, \sigma_2(P)^t \sqrt{n}. \qquad (11)$$
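Inequality (11) can be verified empirically: starting from the point mass e_1, the total-variation distance of P^t e_1 from the uniform distribution must stay below (1/2) sigma_2(P)^t sqrt(n). A quick sketch, reusing graph_matrix from Section 3:

import numpy as np

def check_mixing(P, T=50):
    n = P.shape[0]
    sigma2 = np.linalg.svd(P, compute_uv=False)[1]
    x = np.zeros(n)
    x[0] = 1.0                                   # point mass e_1 in the simplex
    for t in range(1, T + 1):
        x = P @ x
        tv = 0.5 * np.abs(x - 1.0 / n).sum()     # ||P^t e_1 - 1/n||_TV
        assert tv <= 0.5 * sigma2 ** t * np.sqrt(n) + 1e-9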
We focus on controlling the network error term in Theorem 1, $L \sum_{t=1}^{T} \sum_{i=1}^{n} \alpha(t)\, \|\bar{z}(t) - z_i(t)\|_*$. Define the matrix $\Phi(t, s) = P^{t-s+1}$. Let $[\Phi(t, s)]_{ji}$ be entry $j$ of column $i$ of $\Phi(t, s)$. Then
$$z_i(t+1) = \sum_{j=1}^{n} [\Phi(t, s)]_{ji}\, z_j(s) - \Big( \sum_{r=s+1}^{t} \sum_{j=1}^{n} [\Phi(t, r)]_{ji}\, g_j(r-1) \Big) - g_i(t). \qquad (12)$$
Clearly the above reduces to the standard update (4) when $s = t$. Since $\bar{z}(t)$ evolves simply as in (6), we assume w.l.o.g. that $z_i(0) = 0$ and use (12) to see
$$z_i(t) - \bar{z}(t) = \sum_{s=1}^{t-1} \sum_{j=1}^{n} \big(1/n - [\Phi(t-1, s)]_{ji}\big)\, g_j(s-1) + \Big( \frac{1}{n} \sum_{j=1}^{n} \big(g_j(t-1) - g_i(t-1)\big) \Big). \qquad (13)$$
We use the fact that $\|g_i(t)\|_* \leq L$ for all $i$ and $t$ and (13) to see that
$$\|\bar{z}(t) - z_i(t)\|_* = \Big\| \sum_{s=1}^{t-1} \sum_{j=1}^{n} \big(1/n - [\Phi(t-1, s)]_{ji}\big)\, g_j(s-1) + \frac{1}{n} \sum_{j=1}^{n} \big(g_j(t-1) - g_i(t-1)\big) \Big\|_*$$
$$\leq \sum_{s=1}^{t-1} \sum_{j=1}^{n} \|g_j(s-1)\|_*\, \big|(1/n) - [\Phi(t-1, s)]_{ji}\big| + \frac{1}{n} \sum_{j=1}^{n} \|g_j(t-1) - g_i(t-1)\|_*$$
$$\leq \sum_{s=1}^{t-1} L\, \big\|[\Phi(t-1, s)]_i - \mathbb{1}/n\big\|_1 + 2L. \qquad (14)$$
Now we break the sum in (14) into two terms separated by a cutoff point $\hat{t}$. The first term consists of "throwaway" terms, that is, timesteps $s$ for which the Markov chain with transition matrix $P$ has not mixed, while the second consists of steps $s$ for which $\|[\Phi(t-1, s)]_i - \mathbb{1}/n\|_1$ is small. Note that the indexing on $\Phi(t-1, s) = P^{t-s+1}$ implies that for small $s$, $\Phi(t-1, s)$ is close to uniform. From the inequality (11), we have $\|[\Phi(t, s)]_j - \mathbb{1}/n\|_1 \leq \sqrt{n}\, \sigma_2(P)^{t-s+1}$. Hence, if $t - s \geq \frac{\log \hat{\epsilon}^{-1}}{\log \sigma_2(P)^{-1}} - 1$, then we are guaranteed $\|[\Phi(t, s)]_j - \mathbb{1}/n\|_1 \leq \sqrt{n}\, \hat{\epsilon}$. Thus, by setting $\hat{\epsilon}^{-1} = T\sqrt{n}$, for $t - s + 1 \geq \frac{\log(T\sqrt{n})}{\log \sigma_2(P)^{-1}}$, we have $\|[\Phi(t, s)]_j - \mathbb{1}/n\|_1 \leq \frac{1}{T}$. For larger $s$, we simply have $\|[\Phi(t, s)]_j - \mathbb{1}/n\|_1 \leq 2$. The above suggests that we split the sum at $\hat{t} = \frac{\log(T\sqrt{n})}{\log \sigma_2(P)^{-1}}$. Since $t - 1 - (t - \hat{t}) \leq \hat{t}$ and there are at most $T$ steps in the summation,
$$\|\bar{z}(t) - z_i(t)\|_* \leq L \sum_{s=t-\hat{t}}^{t-1} \|\Phi(t-1, s)e_i - \mathbb{1}/n\|_1 + L \sum_{s=1}^{t-1-\hat{t}} \|\Phi(t-1, s)e_i - \mathbb{1}/n\|_1 + 2L$$
$$\leq 2L\, \frac{\log(T\sqrt{n})}{\log \sigma_2(P)^{-1}} + 3L \leq 2L\, \frac{\log(T\sqrt{n})}{1 - \sigma_2(P)} + 3L. \qquad (15)$$
The last inequality follows from the concavity of $\log(\cdot)$, since $\log \sigma_2(P)^{-1} \geq 1 - \sigma_2(P)$.
Combining (15) with the running sum bound in (10) of the proof of the basic theorem, Theorem 1, we find that for $x^* \in \mathcal{X}$,
$$\sum_{t=1}^{T} f(y(t)) - f(x^*) \leq \frac{1}{\alpha(T)} \psi(x^*) + \frac{L^2}{2} \sum_{t=1}^{T} \alpha(t-1) + 6L^2 \sum_{t=1}^{T} \alpha(t) + 4L^2\, \frac{\log(T\sqrt{n})}{1 - \sigma_2(P)} \sum_{t=1}^{T} \alpha(t).$$
Appealing to Lemma 3 allows us to obtain the same result on the sequence $x_i(t)$ with slightly worse constants. Since $\sum_{t=1}^{T} t^{-1/2} \leq 2\sqrt{T} - 1$, using the assumption that $\psi(x^*) \leq R^2$, bounding $f(\hat{x}_i(T)) \leq \frac{1}{T} \sum_{t=1}^{T} f(x_i(t))$, and setting $\alpha(t)$ as in the theorem statement completes the proof.
5 Simulations
In this section, we report experimental results on the network scaling behavior of the distributed dual averaging algorithm as a function of the graph structure and number of processors $n$. These results illustrate the excellent agreement of the empirical behavior with our theoretical predictions. For all experiments reported here, we consider distributed minimization of a sum of hinge losses. We solve a synthetic classification problem, in which we are given $n$ pairs of the form $(a_i, y_i) \in \mathbb{R}^d \times \{-1, +1\}$, where $a_i \in \mathbb{R}^d$ corresponds to a feature vector and $y_i \in \{-1, +1\}$ is the associated label. Given the shorthand notation $[c]_+ := \max\{0, c\}$, the hinge loss associated with a linear classifier based on $x$ is given by $f_i(x) = [1 - y_i \langle a_i, x \rangle]_+$. The global objective is given by the sum $f(x) := \frac{1}{n} \sum_{i=1}^{n} [1 - y_i \langle a_i, x \rangle]_+$. Setting $L = \max_i \|a_i\|_2$, we note that $f$ is $L$-Lipschitz and non-smooth at any point with $\langle a_i, x \rangle = y_i$. As is common, we impose a quadratic regularization, choosing $\mathcal{X} = \{x \in \mathbb{R}^d \mid \|x\|_2 \leq 5\}$. Then for a given graph size $n$, we form a random instance of this SVM classification problem. Although this is a specific ensemble of problems, we have observed qualitatively similar behavior for other problem classes. In all cases, we use the optimal setting of the step size $\alpha$ specified in Theorem 2 and Corollary 1.
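The per-node oracle for this experiment is simple: a subgradient of f_i(x) = [1 - y_i <a_i, x>]_+ is -y_i a_i when the margin constraint is violated and 0 otherwise. A sketch compatible with the distributed round above, assuming one (a_i, y_i) pair per node:

import numpy as np

def hinge_subgrads(A_feat, y):
    # A_feat: (n, d) rows a_i; y: (n,) labels in {-1, +1}.
    def oracle(X):                                            # X: (n, d) local iterates
        margins = 1.0 - y * np.einsum("ij,ij->i", A_feat, X)  # 1 - y_i <a_i, x_i>
        active = (margins > 0.0).astype(float)                # hinge active indicator
        return -(active * y)[:, None] * A_feat                # rows of subgradients
    return oracle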
[Figure 2: semilog plot of $f(x(t)) - f(x^*)$ versus iterations, with curves labeled $T(\epsilon; 225)$, $T(\epsilon; 400)$, and $T(\epsilon; 625)$.]
Figure 2. Plot of the function error versus the number of iterations for a grid graph. Each curve corresponds to a grid with a different number of nodes ($n \in \{225, 400, 625\}$). As expected, larger graphs require more iterations to reach a pre-specified tolerance $\epsilon > 0$, as defined by the iteration number $T(\epsilon; n)$. The network scaling problem is to determine how $T(\epsilon; n)$ scales as a function of $n$.
Figure 3. Each plot shows the number of iterations required to reach a fixed accuracy $\epsilon$ (vertical axis)
versus the network size n (horizontal axis). Panels show the same plot for different graph topologies:
(a) single cycle; (b) two-dimensional grid; and (c) bounded degree expander.
Figure 2 provides plots of the function error $\max_i [f(\hat{x}_i(T)) - f(x^*)]$ versus the number of iterations for grid graphs with a varying number of nodes $n \in \{225, 400, 625\}$. In addition to demonstrating convergence, these plots also show how the convergence time scales as a function of the graph size. We also experimented with the algorithm and stepsize suggested by previous analyses [21]; the resulting stepsize is so small that the method effectively jams and makes no progress.

In Figure 3, we compare the theoretical predictions of Corollary 1 with the actual behavior of dual subgradient averaging. Each panel shows the function $T_G(\epsilon; n)$ versus the graph size $n$ for the fixed value $\epsilon = 0.1$; the three different panels correspond to different graph types: cycles (a), grids (b), and expanders (c). In the panels, each point on the solid blue curve is the average of 20 trials, and the bars show standard errors. For comparison, the dotted black line shows the theoretical prediction. Note that the agreement between the empirical behavior and theoretical predictions is excellent in all cases. In particular, panel (a) exhibits the quadratic scaling predicted for the cycle, panel (b) exhibits the linear scaling expected for the grid, and panel (c) shows that expander graphs have the desirable property of constant network scaling.
6 Conclusions
In this paper, we have developed and analyzed an efficient algorithm for distributed optimization based on dual averaging of subgradients. In addition to establishing convergence, we provided a careful analysis of the algorithm's network scaling. Our results show an inverse scaling in the spectral gap of the graph, and we showed that this prediction is tight in general via a matching lower bound. We have implemented our method, and our simulations show that these theoretical predictions provide a very accurate characterization of its behavior. In the extended version of this paper [4], we also show that it is possible to extend our algorithm and analysis to the cases in which communication is random rather than fixed, the algorithm receives stochastic subgradient information, and the goal is minimization of composite regularized objectives of the form $f(x) + \varphi(x)$.
Acknowledgements: JCD was supported by an NDSEG fellowship and Google. AA was supported by a Microsoft Research Fellowship. In addition, AA was partially supported by NSF grants
DMS-0707060 and DMS-0830410. MJW and AA were partially supported by AFOSR-09NL184.
References
[1] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1989.
[2] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Randomized gossip algorithms. IEEE Transactions on Information Theory, 52(6):2508-2530, 2006.
[3] F. R. K. Chung. Spectral Graph Theory. AMS, 1998.
[4] J. Duchi, A. Agarwal, and M. Wainwright. Dual averaging for distributed optimization: convergence analysis and network scaling. URL http://arxiv.org/abs/1005.2012, 2010.
[5] J. Friedman, J. Kahn, and E. Szemerédi. On the second eigenvalue of random regular graphs. In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, pages 587-598, New York, NY, USA, 1989. ACM.
[6] R. Gray. Toeplitz and circulant matrices: A review. Foundations and Trends in Communications and Information Theory, 2(3):155-239, 2006.
[7] P. Gupta and P. R. Kumar. The capacity of wireless networks. IEEE Transactions on Information Theory, 46(2):388-404, 2000.
[8] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I. Springer, 1996.
[9] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms II. Springer, 1996.
[10] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[11] B. Johansson, M. Rabi, and M. Johansson. A randomized incremental subgradient method for distributed optimization in networked systems. SIAM Journal on Optimization, 20(3):1157-1170, 2009.
[12] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291-307, 2005.
[13] V. Lesser, C. Ortiz, and M. Tambe, editors. Distributed Sensor Networks: A Multiagent Perspective, volume 9. Kluwer Academic Publishers, May 2003.
[14] D. Levin, Y. Peres, and E. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2008.
[15] I. Lobel and A. Ozdaglar. Distributed subgradient methods over random networks. Technical Report 2800, MIT LIDS, 2008.
[16] R. McDonald, K. Hall, and G. Mann. Distributed training strategies for the structured perceptron. In North American Chapter of the Association for Computational Linguistics (NAACL), 2010.
[17] A. Nedic and D. P. Bertsekas. Incremental subgradient methods for nondifferentiable optimization. SIAM Journal on Optimization, 12(1):109-138, 2001.
[18] A. Nedic and A. Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54:48-61, 2009.
[19] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, New York, 1983.
[20] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming A, 120(1):261-283, 2009.
[21] S. Sundhar Ram, A. Nedic, and V. V. Veeravalli. Distributed subgradient projection algorithm for convex optimization. In IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 3653-3656, 2009.
[22] J. Tsitsiklis. Problems in decentralized decision making and computation. PhD thesis, Massachusetts Institute of Technology, 1984.
[23] U. von Luxburg, A. Radl, and M. Hein. Hitting times, commute distances, and the spectral gap for large random geometric graphs. URL http://arxiv.org/abs/1003.1266, 2010.
[24] L. Xiao, S. Boyd, and S. J. Kim. Distributed average consensus with least-mean-square deviation. Journal of Parallel and Distributed Computing, 67(1):33-46, 2007.
3,495 | 4,165 | MAP Estimation for Graphical Models by
Likelihood Maximization
Shlomo Zilberstein
Department of Computer Science
University of Massachusetts
Amherst, MA
[email protected]
Akshat Kumar
Department of Computer Science
University of Massachusetts
Amherst, MA
[email protected]
Abstract
Computing a maximum a posteriori (MAP) assignment in graphical models is a
crucial inference problem for many practical applications. Several provably convergent approaches have been successfully developed using linear programming
(LP) relaxation of the MAP problem. We present an alternative approach, which
transforms the MAP problem into that of inference in a mixture of simple Bayes
nets. We then derive the Expectation Maximization (EM) algorithm for this mixture that also monotonically increases a lower bound on the MAP assignment
until convergence. The update equations for the EM algorithm are remarkably
simple, both conceptually and computationally, and can be implemented using a
graph-based message passing paradigm similar to max-product computation. Experiments on the real-world protein design dataset show that EM's convergence
rate is significantly higher than the previous LP relaxation based approach MPLP.
EM also achieves a solution quality within 95% of optimal for most instances.
1 Introduction
Graphical models provide an effective framework to model complex systems via simpler local interactions and also provide an insight into the structure of the underlying probabilistic model. In
particular, we focus on the class of undirected models called Markov random fields (MRFs) for
which the joint distribution can be specified as the product of potential functions over the cliques
of the graph. For many practical problems modeled using MRFs, finding the maximum a posteriori
(MAP) assignment or the most probable assignment to the variables in the graph is a key inference problem. For example, MAP estimation has been applied to image processing in computer
vision [17, 11], protein design and protein side-chain prediction problems [17, 11], and natural language processing [7]. Finding the MAP assignment is NP-hard in general except for tree-structured
graphs and graphs with bounded treewidth [5, 3]. This further underscores the need for developing
scalable approximation algorithms that provide good solution quality.
Recently, many algorithms have been proposed for approximating the MAP problem [15, 5, 11, 9, 6].
Particularly, linear programming (LP) relaxation of the MAP problem has emerged as a popular
technique to solve large-scale problems such as protein design and prediction problems [17, 10, 11].
Such approaches relax the constraint that the solution for the MAP problem be integral. However, for
large problems such as protein design, the large size of the LP prohibits the application of standard
LP-solvers [17]. To alleviate such scalability issues, convergent message passing algorithms have
been introduced, which monotonically decrease the dual objective of the LP relaxation [5, 11, 9].
Convergence to the global optima is not guaranteed in general, but when the solution is integral, it
can be shown to be globally optimal. The main advantage of these approaches lies in their ability
to provide an upper bound on the problem and a certificate of optimality when upper bound is
sufficiently close to the decoded solution.
In our work, we take a different approach to the MAP problem based on mean field methods in
variational inference [16]. First, we present an alternate representation of the MAP problem by decomposing the MRF into a finite-mixture of simple Bayes nets in which maximizing the likelihood
of a special variable is equivalent to solving the MAP problem. Our approach is inspired by recent developments in planning by probabilistic inference and goal-directed planning [1, 13, 14, 12].
Second, using this alternate representation, we derive the EM algorithm for approximate MAP estimation. EM increases the lower bound on the MAP assignment monotonically until convergence
and lends itself naturally to a graph-based message passing implementation.
The main advantage of the EM approach lies in settings where a good approximation to MAP needs
to be generated quickly. In our experiments on some of the largest protein design problems [17, 11],
we show that EM increases the lower bound on MAP rapidly. This attribute of EM combined
with the Max-Product LP algorithm (MPLP) [5, 11] that decreases the upper bound rapidly (as
observed empirically) yields a new hybrid approach that provides quality-bounded solutions significantly faster than previous approaches. Although convergence to the global optima is not guaranteed,
EM achieves an average solution quality within 95% of optimal for the protein design problems and
is significantly faster than both MPLP [5, 11] and max-product (MP) [8]. We show that each iteration of EM is faster than that of max-product or MPLP by a factor related to the average degree
of the graph. Empirically, the speedup factor can be as high as 30 for densely connected problems.
We also show that EM is an embarrassingly parallel algorithm and can be parallelized easily to further speedup the convergence. Finally we also discuss potential pitfalls that are inherent in the EM
formulation and highlight settings in which EM may not perform well.
2 Markov Random Fields and the MAP Problem
A pairwise Markov random field (MRF) can be described by an undirected graph G = (V, E) consisting of a set of nodes, one per variable in x = {x1 , . . . , xn }, and a set of edges that connect pairs
of nodes. A variable can take any value from a set of possible values referred to as the domain of
that variable. An edge (i, j) between nodes x_i and x_j specifies a function θ_ij. The joint assignment
x has the probability:
    p(x; θ) = (1/Z) exp( Σ_{(i,j)∈E} θ_ij(x_i, x_j) ).
The MAP problem consists of finding the most probable assignment to all variables under p(x; θ).
This is equivalent to finding the complete assignment x that maximizes the function f(x; θ) =
Σ_{(i,j)∈E} θ_ij(x_i, x_j). Before describing our formulation of the MAP problem, we first describe the
marginal polytope associated with the MAP problem and its outer bound based on LP relaxation.
Then we discuss the relation of our approach with these polytopes. For details, we refer to [16, 11].
Let μ denote a vector of marginal probabilities (also called mean parameters) for each node and
edge of the MRF. That is, μ includes μ_i(x_i) ∀i ∈ V and μ_ij(x_i, x_j) ∀(i, j) ∈ E. The set of μ that
arises from some joint distribution p is referred to as the marginal polytope:
    M(G) = {μ | ∃p(x) s.t. p(x_i, x_j) = μ_ij(x_i, x_j), p(x_i) = μ_i(x_i)}.                    (1)
The MAP problem is then equivalent to solving the following LP:
    max_x f(x; θ) = max_{μ∈M(G)} θ · μ = max_{μ∈M(G)} Σ_{(i,j)∈E} Σ_{x_i,x_j} θ_ij(x_i, x_j) μ_ij(x_i, x_j).    (2)
It can be shown that there always exists a maximizing solution μ which is integral and gives the
optimal x. Unfortunately, the number of constraints used to describe this polytope is exponential,
and thus it cannot be solved efficiently. To remedy this, LP relaxations are proposed that outer bound
the polytope M(G). The relaxation weakens the global constraint that μ arises from some common
distribution p. Instead, only pairwise and singleton consistency is required for the mean parameters, as
given by the following conditions:
    Σ_{x_i} μ_i(x_i) = 1 ∀i ∈ V,   Σ_{x̃_i} μ_ij(x̃_i, x_j) = μ_j(x_j),   Σ_{x̃_j} μ_ij(x_i, x̃_j) = μ_i(x_i) ∀(i, j) ∈ E.    (3)
The outer bound polytope is expressed as
    M_L(G) = {μ ≥ 0 | the conditions of Eq. 3 hold}.                                            (4)
[Figure 1: (a) A pairwise Markov random field over four variables x_1, ..., x_4; (b) the equivalent mixture of Bayes nets, one per edge, indexed by l = 1, ..., 4, each with a reward variable r̂.]
LP relaxation approaches such as MPLP [5, 11] optimize the function θ · μ over this outer bound
M_L(G) and consequently yield an upper bound on the MAP. Next we describe our approach for
estimating the MAP.
Inner bound on the marginal polytope  In the definition of the marginal polytope M(G) (Eq. 1), no
restrictions are placed on the probability distribution p. Consider the class of probability
distributions that factorize according to the variables of the MRF, p⁰(x) = Π_{i=1}^n p⁰_i(x_i). This is
similar to the mean field methods used in variational inference [16]. Our approach is to directly
optimize over the following set of mean parameters μ:
    M_lb(G) = {μ | ∃p⁰(x) s.t. μ_i(x_i) = p⁰_i(x_i), μ_ij(x_i, x_j) = p⁰_i(x_i) p⁰_j(x_j)},      (5)
where p⁰ is the distribution that factorizes according to the variables in the MRF. Clearly M_lb(G) is
an inner bound on M(G), because in M(G) there is no such restriction on the class of allowed
probability distributions. The optimization criterion for estimating the MAP under this set is:
    max_x f_lb(x; θ) = max_{μ∈M_lb(G)} θ · μ = max_{μ∈M_lb(G)} Σ_{(i,j)∈E} Σ_{x_i,x_j} θ_ij(x_i, x_j) μ_i(x_i) μ_j(x_j).    (6)
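As an illustration, the objective in Eq. 6 is cheap to evaluate for any factored set of marginals. The sketch below is our own, not part of the original paper; the graph layout and variable names are hypothetical.

```python
import numpy as np

def flb(theta, marginals, edges):
    """Evaluate the lower-bound objective of Eq. 6.

    theta: dict mapping edge (i, j) -> 2-D array theta_ij[x_i, x_j]
    marginals: list of 1-D arrays, marginals[i][x_i] = mu_i(x_i)
    edges: list of (i, j) pairs
    """
    total = 0.0
    for (i, j) in edges:
        # sum_{x_i, x_j} theta_ij(x_i, x_j) mu_i(x_i) mu_j(x_j)
        total += marginals[i] @ theta[(i, j)] @ marginals[j]
    return total

# Tiny example: a 3-node chain with binary variables.
edges = [(0, 1), (1, 2)]
rng = np.random.default_rng(0)
theta = {e: rng.normal(size=(2, 2)) for e in edges}
mu = [np.array([0.5, 0.5]) for _ in range(3)]
print(flb(theta, mu, edges))
```

For an integral (0/1) choice of marginals this reduces exactly to f(x; θ), which is why optimizing over M_lb(G) preserves the MAP value.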
Let f*_lb denote the optimizing value for the above formulation and f* the value for the formulation in Eq. 2.
Clearly f*_lb ≤ f*. A simple observation shows that indeed f*_lb = f*. The reason is that there always
exists a maximizing μ ∈ M(G) that is integral, and thus f* corresponds to an integral assignment
x, which is also the MAP assignment [16]. Since all integral assignments are also allowed by
the definition of the factored distribution p⁰ and M_lb(G), it follows that optimizing over M_lb(G)
implies f*_lb = f* and yields the MAP estimate. It is worth noticing that the constraints describing
the set M_lb(G) are only linear in the number of nodes in the MRF and correspond to normalization
constraints, as opposed to the exponentially large constraint set for M(G).
It might appear that we have significantly reduced the space of allowed mean parameters μ while
still preserving the MAP. But the problem still remains challenging. The reduced set of parameters
M_lb(G) is non-convex because of the non-linear constraint μ_ij(x_i, x_j) = μ_i(x_i) μ_j(x_j) [16]. Thus
optimization over M_lb(G) cannot be done using linear programming. To alleviate this problem,
we next present another reformulation of the optimization problem in Eq. 6. Then we present the
Expectation Maximization (EM) algorithm [4] for this reformulation that monotonically increases
the lower bound on the MAP assignment using likelihood maximization until convergence.
MAP as a Mixture of Bayes Nets
In this section we reformulate the optimization problem in Eq. 6 and recast it as the problem of
likelihood maximization in a finite-mixture of simple Bayes nets. The key idea is to decompose
the MRF into a mixture of simpler Bayes nets with many hidden variables ? all the variables xi of
the MRF. To incorporate the potential functions ??s of the MRF and achieve equivalence between
the likelihood and MAP value, a special binary reward variable ?? is introduced with its conditional
distribution proportional to potentials ?. The details of the reformulation follow.
For each edge (i, j) in the graph G corresponding to the MRF, we create a depth-1 Bayes net. It
consists of a binary reward variable ?? with its parents being the variables xi and xj . The reason
for calling it a reward variable will become clear later. Fig. 1(a) shows a pairwise MRF over four
variables. Fig. 1(b) shows the equivalent mixture of Bayes nets for each of the four edges in this
MRF. The mixture random variable l, which is used to identify the Bayes nets, can take values from
1 to |E|, the number of edges in the graph, with uniform probability. That is, if k = |E|, then
3
P (l = i) = 1/k for any 1 ? i ? k. In what follows, we will also use the variable l to denote the
corresponding edge in the MRF.
The parameters to estimate in this mixture are the marginal probabilities for each node xi . That is
p = hp1 , . . . , pn i. This step directly establishes the connection with the space of factored probability
distribution p0 of the set Mlb (G) (see Eq. 5).
Next we set the conditional probability distribution of the variable ?? for each of the Bayes nets. This
is done as follows:
?l (xl1 , xl2 ) ? ?min
P (?? = 1|xl1 , xl2 , l) =
(7)
?max ? ?min
where l indicates a particular Bayes net corresponding to an edge of the MRF, xl1 and xl2 are
the parent variables of ?? in this Bayes net and ?l the potential function for this edge. ?max is the
maximum value for any potential function ?, and ?min the minimum value. For example, for l = 1 in
Fig. 1(b), xl1 = x1 , xl2 = x2 and P (?? = 1|x1 , x2 , l = 1) = (?12 (x1 , x2 ) ? ?min )/(?max ? ?min ).
Note that these probabilities are nothing but the normalized potential functions ?ij of the original
MRF. For this reason, ?? is also called a reward variable. It is used to establish the equivalence
between the MAP value and the likelihood of observing ?? = 1.
The full joint for a particular Bayes net indicated by the variable l is given by
? xl , xl |l; p) = P (?|x
? l , xl , l)pl (xl ; p)pl (xl ; p).
P (?,
1
2
1
2
1
1
2
2
(8)
where pl1 is the marginal associated with the variable xl1 . Let us denote the variables (xl1 , xl2 ) by
xl and let ??xl denote the probability P (?? = 1|xl1 , xl2 , l), then the following theorem establishes the
link between the likelihood and MAP value. ?l (xl ) denotes the corresponding potential function ?l
of the MRF, for l = 1 in Fig. 1(b), ?l (xl ) = ?12 (x1 , x2 ).
Theorem 1. Let the CPT of binary reward variable ?? be selected such that ??xl ? ?l (xl ). Then
maximizing the likelihood Lp = P (?? = 1; p) of observing the reward variable in the mixture of
Bayes nets is equivalent to the MAP estimation of the original MRF.
Proof. The likelihood for a single Bayes net is given by
    L_p^l = P(r̂ = 1 | l; p) = Σ_{x_l} P(r̂ = 1, x_l1, x_l2 | l; p) = Σ_{x_l} θ̂_{x_l} p_l1(x_l1; p) p_l2(x_l2; p).    (9)
For the complete mixture, it is given by
    L_p = Σ_l P(l) L_p^l = (1/k) Σ_l Σ_{x_l} θ̂_{x_l} p_l1(x_l1; p) p_l2(x_l2; p).               (10)
Upon substituting the definition of θ̂_{x_l} from Eq. 7 and using simple algebraic manipulations, we get
    Σ_l Σ_{x_l} θ_l(x_l) p_l1(x_l1; p) p_l2(x_l2; p) = k(θ_min + (θ_max − θ_min) L_p).
Notice that the LHS of the above equation is the same as the optimization objective in Eq. 6. Thus
we have shown that maximizing the likelihood L_p provides the MAP estimate.
The above equation can also be explained intuitively in the context of goal-directed planning. The
RHS can be rewritten as k(L_p θ_max + L̄_p θ_min), where L̄_p = 1 − L_p. According to this formulation,
there are only two rewards in the system: θ_min and θ_max. The goal is to achieve the higher reward
θ_max for each edge in the MRF. Thus maximizing the probability L_p of achieving this goal solves
the optimization problem.
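The identity connecting the MAP objective and the likelihood is easy to verify numerically. The check below is our own illustration on a randomly generated model, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
edges = [(0, 1), (1, 2), (0, 2)]          # a small triangle MRF
k = len(edges)
theta = {e: rng.normal(size=(3, 3)) for e in edges}
t_min = min(t.min() for t in theta.values())
t_max = max(t.max() for t in theta.values())
# arbitrary factored marginals p_i
p = [rng.dirichlet(np.ones(3)) for _ in range(3)]

lhs = sum(p[i] @ theta[(i, j)] @ p[j] for (i, j) in edges)             # objective of Eq. 6
theta_hat = {e: (theta[e] - t_min) / (t_max - t_min) for e in edges}   # Eq. 7
Lp = sum(p[i] @ theta_hat[(i, j)] @ p[j] for (i, j) in edges) / k      # Eq. 10
rhs = k * (t_min + (t_max - t_min) * Lp)
assert np.isclose(lhs, rhs)
```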
4 EM Algorithm for MAP Estimation
We now derive the EM algorithm [4] for maximizing the likelihood of the reward variable in the
mixture of Bayes nets. In this mixture, only the reward variable is treated as observed (r̂ = 1); all
Algorithm 1: Graph-based message passing for MAP estimation
input: graph G = (V, E) for the MRF and potentials θ for each edge
repeat
    foreach node i ∈ V do
        MPLP: send message λ_{i→j} to each neighbor j ∈ Ne(i):
            λ_{i→j}(x_j) ← max_{x_i} [ θ_ij(x_i, x_j) − λ_{j→i}(x_i) + (2/(|Ne(i)|+1)) Σ_{k∈Ne(i)} λ_{k→i}(x_i) ]
        set the node belief b_i(x_i) to the sum of incoming messages: b_i(x_i) = Σ_{k∈Ne(i)} λ_{k→i}(x_i)
        EM: send message λ_{i→j} to each neighbor j ∈ Ne(i):
            λ_{i→j}(x_j) ← Σ_{x_i} p_i(x_i) θ̂_{x_i x_j}
        set the marginal to the normalized sum of incoming messages: p̃_i(x_i) = p_i(x_i) Σ_{k∈Ne(i)} λ_{k→i}(x_i) / C_i
until stopping criterion is satisfied
MPLP: return the complete assignment x s.t. x_i = argmax_{x̃_i} b_i(x̃_i)
EM: return the complete assignment x s.t. x_i = argmax_{x̃_i} p_i(x̃_i)
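The following sketch spells out the two message types in Algorithm 1 for a single node; it is our reconstruction of the garbled pseudocode above, so the exact MPLP weighting should be checked against [5].

```python
import numpy as np

def mplp_message(i, j, theta, lam, neighbors):
    """MPLP message lambda_{i->j}(x_j), as reconstructed above.

    theta[(i, j)] is the table theta_ij[x_i, x_j];
    lam[(k, i)] is the incoming message lambda_{k->i}(x_i)."""
    incoming = sum(lam[(k, i)] for k in neighbors[i])          # sum over k in Ne(i)
    inner = (theta[(i, j)]                                     # theta_ij(x_i, x_j)
             - lam[(j, i)][:, None]                            # - lambda_{j->i}(x_i)
             + (2.0 / (len(neighbors[i]) + 1)) * incoming[:, None])
    return inner.max(axis=0)                                   # max over x_i

def em_message(i, j, theta_hat, p):
    """EM message lambda_{i->j}(x_j) = sum_{x_i} p_i(x_i) theta_hat[x_i, x_j]."""
    return p[i] @ theta_hat[(i, j)]
```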
other variables are latent. We note that EM is not guaranteed to converge to a global optimum. However, our experiments show that EM achieves an average solution quality within 95% of optimal for
the standard MAP benchmark of protein design problems. We also show that the update equations
for EM can be implemented efficiently using graph-based message passing and are computationally
much faster than other message-passing algorithms such as max-product [8] and MPLP [5]. Below,
we derive the update equations for the M-step. The E-step can be directly inferred from that. The
parameters p to estimate are the marginal probabilities pi for each variable xi .
M-step: EM maximizes the following expected complete log-likelihood for the mixture of Bayes
nets. The variable p denotes the previous parameters and p' denotes the new parameters:
    Q(p, p') = Σ_l Σ_{x_l} P(r̂ = 1, x_l, l; p) log P(r̂ = 1, x_l, l; p').                       (11)
The full joint is given by
    P(r̂ = 1, x_l, l; p) = P(r̂ = 1 | x_l, l) P(x_l | l; p) P(l) = (1/k) θ̂_{x_l} p_l1(x_l1; p) p_l2(x_l2; p).
We will omit the parameter p whenever the expression is unambiguous. Taking the log, we get
    log P(r̂ = 1, x_l, l; p') = ⟨terms independent of p'⟩ + log p'_l1(x_l1) + log p'_l2(x_l2).   (12)
Substituting the above equation into the definition of Q(p, p') (Eq. 11) and discarding the terms
which are independent of p', we get
    Q(p, p') = (1/k) Σ_l Σ_{x_l} θ̂_{x_l} p_l1(x_l1) p_l2(x_l2) { log p'_l1(x_l1) + log p'_l2(x_l2) }.    (13)
Upon simplifying the above equation by grouping together the terms associated with the variables
x_i of the MRF, we get
    Q(p, p') = (1/k) Σ_{i=1}^n Σ_{x_i} p_i(x_i) log p'_i(x_i) Σ_{j∈Ne(i)} Σ_{x_j} θ̂_{x_i x_j} p_j(x_j),    (14)
where Ne(i) denotes the set of immediate neighbors of the node i in the MRF graph. The above
expression can be easily maximized by maximizing over each variable x_i individually. The final update
equation for the marginals is given by
    p̃_i(x_i) = p_i(x_i) Σ_{j∈Ne(i)} Σ_{x_j} θ̂_{x_i x_j} p_j(x_j) / C_i,                         (15)
where C_i is the normalization constant for variable x_i, and θ̂_{x_i x_j} is the normalized reward
    P(r̂ = 1 | x_i, x_j, l) = (θ_ij(x_i, x_j) − θ_min)/(θ_max − θ_min).
Algorithm 1 shows the graph-based message passing technique for both EM and MPLP. For both
EM and MPLP, parameters are initialized randomly. The rest of the steps are self-explanatory.
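A compact implementation of the EM update (Eq. 15) is given below. This is our sketch, not the authors' Java code; the data layout is hypothetical.

```python
import numpy as np

def em_iteration(p, theta_hat, neighbors):
    """One EM iteration: apply Eq. 15 simultaneously to every node.

    p: list of 1-D marginals p_i; theta_hat[(i, j)]: normalized rewards
    with shape (|dom(x_i)|, |dom(x_j)|); neighbors[i]: list of j in Ne(i)."""
    new_p = []
    for i in range(len(p)):
        # sum_{j in Ne(i)} sum_{x_j} theta_hat[x_i, x_j] p_j(x_j)
        score = sum(theta_hat[(i, j)] @ p[j] for j in neighbors[i])
        unnorm = p[i] * score
        new_p.append(unnorm / unnorm.sum())       # divide by C_i
    return new_p
```

Note that each new marginal depends only on the previous iteration's parameters, so all n updates in a sweep are independent of one another.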
[Figure 2: (a) Quality achieved by EM for all protein design instances, as a percentage of optimal; legends "Optimal", "UB", "AVG-OPT", "AVG-UB". (b) Quality over time for the largest instance "1fpo" (N=170, C=3167), comparing EM, MPLP, and the upper bound (U.B.); the x-axis denotes time (in sec.), the y-axis the quality achieved.]
Complexity analysis and implementation  Consider a single message λ sent out by a node in
MPLP. The complexity of computing λ is O(d² · deg), where d is the domain size of the variables
and deg is the average degree of the graph, i.e., the average number of neighbors of a node.
For EM, this complexity is only O(d²). Therefore, the computational complexity of each iteration
of EM is lower than that of MPLP by a factor of deg. The same result holds for max-product,
because its message-passing structure is similar to that of MPLP. The average number of neighbors
in dense graphs such as the ones encountered in protein design problems can be as high as 30. This
makes EM significantly faster than the previous approaches, as we demonstrate empirically in the
next section.
EM's simple message passing scheme also facilitates a very efficient parallel implementation. In
particular, all the λ messages for the current iteration in Alg. 1 can be computed in parallel for each
node i, because they depend only on the parameters from the previous iteration. In contrast, MPLP
follows a block coordinate descent strategy in which optimization is performed over a subset of
variables, keeping all the other variables fixed [5]. Therefore, opportunities for parallelism in the
current implementations of MPLP are limited.
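A minimal way to exploit this, sketched below on top of the em_iteration helper above (our own illustration), is to fan the per-node updates out to a worker pool; since every update reads only the frozen previous-iteration marginals, no synchronization is needed within a sweep.

```python
from concurrent.futures import ThreadPoolExecutor

def em_iteration_parallel(p, theta_hat, neighbors, pool):
    """Same update as Eq. 15, with the n node updates dispatched in parallel."""
    def update_node(i):
        score = sum(theta_hat[(i, j)] @ p[j] for j in neighbors[i])
        unnorm = p[i] * score
        return unnorm / unnorm.sum()
    return list(pool.map(update_node, range(len(p))))

# usage: with ThreadPoolExecutor() as pool: p = em_iteration_parallel(p, theta_hat, nbrs, pool)
```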
5 Experiments
Our first set of experiments is on the protein design problems (97 instances in total), which are
described in [17]. In these problems, given a desired backbone structure of the protein, the task is to
find a sequence of amino acids that is as stable as possible, i.e., has the lowest energy. This problem can
be represented as finding the MAP configuration in an MRF. These problems are particularly hard
and dense, with up to 170 variables, each having a large domain size of up to 150 values. We compare
performance with the MPLP algorithm as described in [5, 11] and with max-product [8]. We used
the standard setting for MPLP: first it is run with edge-based clusters for 1000 iterations [5], and
then clusters of size 3 are added to tighten the LP relaxation [11]. EM was implemented in Java. To
speed up the convergence of EM, we used a simple modification of the M-step as described in [12].
All our experiments were done on a Mac Pro with dual quad-core processor and 4GB RAM. All
algorithms used only a single processor for computation. We note that another clustering-based
improvement of MPLP is presented in [9]. Such clusters can be similarly incorporated into the EM
algorithm, which currently does not use any clusters. Therefore, comparisons with such clustering
techniques are left for future work.
The main purpose of our experiments is to show that EM achieves high solution quality, much more
quickly than MPLP or max-product. Therefore EM provides a good alternative, particularly when
fast near-optimal solutions are desired. As reported in [11], for protein design problems solved
exactly, mean running time was 9.7 hours. For all the problems, instead of running MPLP until the
near-optimal solution is found, we used a fixed cutoff of 5000 sec. For all the problems, we ran EM
and max-product for 1500 iterations. For EM, different runs were initialized randomly and the best
of 5-runs is plotted. Empirically, EM achieves a solution quality within 95% of optimal on average
much faster than MPLP. The longest time EM took for any protein design instance was 352 sec. for
the "1fpo" instance (Fig. 2(b)).
[Figure 3: Quality comparison with MPLP for six of the largest protein design instances (1fs1, 1gef, 1iib, 1on2, 1tmy, 2phy), with the number of variables N and potentials C shown per panel. Each panel plots EM, MPLP, and the upper bound (U.B.); the x-axis denotes time (in sec.) and the y-axis denotes the quality achieved.]
Fig. 2(a) shows the solution quality EM achieves for all the instances in 1500 iterations. Since a tight
upper bound is known for all the problems except the instance "1fpo" [11], we show the percentage
of optimal that EM achieves. The legend titled "Optimal" in Fig. 2(a) shows this value. For the unsolved
instance "1fpo", we use the best known upper bound MPLP achieved in 10 hours (≈ 434). As is
clear from this graph, EM achieves near-optimal solution quality for all the instances, within 95%
on average. To show empirically that MPLP decreases the upper bound quickly, we also show the
percentage of solution quality EM achieves when, instead of using the best known upper bound, we
use the upper bound provided by MPLP after 1,000 iterations. The legend in Fig. 2(a) titled "UB"
shows this percentage. Even using this bound, EM achieves a quality within 91% on average (legend
"AVG-UB"). This further suggests that combining EM's ability to rapidly increase the lower bound
with MPLP's ability to decrease the upper bound quickly is a good way to create a hybrid approach
that can provide provably near-optimal solutions much faster.
Fig. 2(b) shows the quality achieved by EM and MPLP as a function of time for the largest instance
"1fpo". To show the convergence curve of EM clearly, the plot uses a different scale for time T ≤ 200
and for the rest. This graph also shows that EM provides much better solution quality, much faster
than MPLP. The legend "U.B." denotes the best known upper bound. Empirically, we noticed that
the main advantage of the EM approach was on problems that are large in size, with many
variables. For smaller problems, EM and MPLP were comparable in performance. Fig. 3 shows
the quality comparison over time for some of the largest protein design instances. Each graph title
shows the instance name; N denotes the number of variables in the MRF and C denotes the number
of potential functions θ, i.e., edges in the graph. For all these problems, EM provided near-optimal
solution quality and was significantly faster than MPLP.
We also compared EM with max-product. Table 1 shows this comparison for some of the largest
protein design instances. In this table, MP Quality denotes the best quality max-product achieved in
1500 iterations, Time/Iteration denotes the time and iteration number when it was achieved for the
first time. Again, EM outperforms max-product by a significant margin, achieving a higher solution
quality much faster. For some of the problems, such as "1tmy" and "1or7", the quality achieved
by EM was much higher. Also, max-product converged on none of these problems. This may
be due to the fact that these are highly constrained problems with many cycles in the graph. The
average degree of a node for these problems is very high, e.g., ≈ 37 for "1fpo". The time required
per iteration of max-product was 11 sec. for this instance. Therefore the predicted time per EM
iteration is 11/37 ≈ 0.298 sec.; the actual time for EM was 0.235 sec. The same result holds for
other instances as well. This is consistent with the complexity analysis in Sec. 4.
We also tested EM on the protein prediction problems [17, 11], which are simpler and sparser than
the protein design problems. The LP relaxations in this case can be solved even by standard
LP solvers, unlike the protein design problems [17]. Surprisingly, EM does not work well on these
Instance | MP Quality | Time/Iteration | EM Quality | Time/Iteration | U.B.
1fs1     | 268.3      | 3628.5/344     | 267.6      | 202.3/1220     | 276.4
1gef     | 239.8      | 13938.9/1331   | 267.3      | 71.6/428       | 276.1
1bkb     | 272.4      | 10928.9/965    | 288.2      | 250.1/1462     | 292.8
1iib     | 236.2      | 11493.1/1099   | 245.7      | 78.1/442       | 251.1
1on2     | 314.7      | 11628.34/1226  | 317.1      | 146.6/807      | 327.23
1tmy     | 202.9      | 99.5/8         | 264.8      | 222.4/1067     | 272.1
1or7     | 368.1      | 234.9/22       | 410.2      | 240.8/1087     | 419.3
1fpo     | 406.2      | 9072.7/791     | 407.1      | 263.6/1125     | 434

Table 1: Solution quality and time (in sec.) comparison between EM and max-product (MP). U.B.
denotes the best known upper bound.
problems. For the hardest instance "1a8i" (812 variables, 10124 edges, edge density 0.03), MPLP
achieves the near-optimal value of 73, whereas EM could only achieve a value of −374. The reason
for this lies in the reward structure of the problem, i.e., the values θ_min and θ_max. For this problem,
θ_min = −5770.96 and θ_max = 3.88. As shown earlier, EM works with the normalized rewards,
assigning 0 to the minimum reward −5770.96 and 1 to the maximum reward 3.88. This dramatic
scaling of the reward is particularly problematic for EM, as shown below.
According to Thm. 1, the log-likelihood EM converges to is −2.949 × 10⁻⁴. For EM to achieve the
value 73, the log-likelihood should be −2.913 × 10⁻⁴. However, the drastic scaling of the reward
causes this minor difference to significantly affect solution quality. In such settings, EM may not
work well. In contrast, for the largest protein design instance "1fpo", the minimum reward is −59.2
and the maximum is 4.37. We also experimented on a 10×10 grid graph with 5 values per variable using
the Potts model, similarly to [5]. We randomly generated 100 instances and found that EM achieved
good solution quality, within 95% of optimal on average. The difference between the maximum and
minimum reward in these problems was less than 5, with a typical setting θ_min ≈ −2.5, θ_max ≈ 2.5.
6 Conclusion
A number of techniques have been developed recently to find the MAP assignment of Markov random fields. Particularly successful are approaches based on LP relaxation of the MAP problem
such as MPLP. Such approaches minimize an upper bound relatively quickly, but take much longer
to find a good solution. In contrast, our proposed formulation seeks to provide good quality solutions quickly by directly maximizing a lower bound on the MAP value over the inner bound on the
marginal polytope. The proposed Expectation Maximization (EM) algorithm increases this lower
bound monotonically by likelihood maximization and is guaranteed to converge. Furthermore, EM's
update equations can be efficiently implemented using a graph-based message passing paradigm.
Although EM may get stuck at a local optimum, our empirical results on the protein design dataset
show that EM performs very well, producing solutions within 95% of optimal on average. EM
achieves such high solution quality significantly faster than MPLP or max-product for many large
protein design problems. Another significant advantage EM enjoys is the ease of parallelization.
Using advanced parallel computing paradigms such as Google's MapReduce [2] can further speed up
the algorithm with little additional effort. Finally, we examined a setting in which EM may not
work well due to a large gap between the minimum and maximum reward. Our ongoing efforts
include incorporating some of the advanced clustering techniques based on LP relaxation of the
MAP problem with the EM method, and designing heuristics that can help EM avoid getting stuck
in local optima for problems with large variations in the reward structure.
7 Acknowledgment
We thank anonymous reviewers for their helpful suggestions. Support for this work was provided in
part by the National Science Foundation Grant IIS-0812149 and by the Air Force Office of Scientific
Research Grant FA9550-08-1-0181.
References
[1] H. Attias. Planning by probabilistic inference. In Proc. of the 9th Int. Workshop on Artificial Intelligence
and Statistics, 2003.
[2] J. Dean and S. Ghemawat. MapReduce: a flexible data processing tool. Communications of the ACM,
53(1):72?77, 2010.
[3] R. Dechter. Constraint Processing. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2003.
[4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM
algorithm. Journal of the Royal Statistical society, Series B, 39(1):1?38, 1977.
[5] A. Globerson and T. Jaakkola. Fixing Max-Product: Convergent message passing algorithms for MAP
LP-relaxations. In Advances in Neural Information Processing Systems, 2007.
[6] K. Jung, P. Kohli, and D. Shah. Local rules for global MAP: When do they work? In Advances in Neural
Information Processing Systems, 2009.
[7] C. D. Manning and H. Schütze. Foundations of statistical natural language processing. MIT Press,
Cambridge, MA, USA, 1999.
[8] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann Publishers Inc., 1988.
[9] D. Sontag, A. Globerson, and T. Jaakkola. Clusters and coarse partitions in LP relaxations. In Advances
in Neural Information Processing Systems, pages 1537?1544, 2008.
[10] D. Sontag and T. Jaakkola. New outer bounds on the marginal polytope. In Advances in Neural Information Processing Systems, 2007.
[11] D. Sontag, T. Meltzer, A. Globerson, T. Jaakkola, and Y. Weiss. Tightening LP relaxations for MAP using
message passing. In Proc. of Uncertainty in Artificial Intelligence, pages 503?510, 2008.
[12] M. Toussaint, L. Charlin, and P. Poupart. Hierarchical POMDP controller optimization by likelihood
maximization. In Proc. of Uncertainty in Artificial Intelligence, pages 562?570, 2008.
[13] M. Toussaint, S. Harmeling, and A. Storkey. Probabilistic inference for solving (PO)MDPs. Technical
Report EDIINF-RR-0934, University of Edinburgh, School of Informatics, 2006.
[14] M. Toussaint and A. J. Storkey. Probabilistic inference for solving discrete and continuous state markov
decision processes. In Proc. of the International Conference on Machine Learning, pages 945?952, 2006.
[15] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on (hyper)trees: Messagepassing and linear programming approaches. IEEE Transactions on Information Theory, 51:3697?3717,
2002.
[16] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference.
Foundations and Trends in Machine Learning, 1(1-2):1?305, 2008.
[17] C. Yanover, T. Meltzer, Y. Weiss, P. Bennett, and E. Parrado-Hernández. Linear programming relaxations
and belief propagation: an empirical study. Journal of Machine Learning Research, 7, 2006.
3,496 | 4,166 | Parametric Bandits:
The Generalized Linear Case
Olivier Cappé
LTCI
Telecom ParisTech et CNRS
Paris, France
[email protected]
Sarah Filippi
LTCI
Telecom ParisTech et CNRS
Paris, France
[email protected]
Aurélien Garivier
LTCI
Telecom ParisTech et CNRS
Paris, France
[email protected]
Csaba Szepesvári
RLAI Laboratory
University of Alberta
Edmonton, Canada
[email protected]
Abstract
We consider structured multi-armed bandit problems based on the Generalized
Linear Model (GLM) framework of statistics. For these bandits, we propose a new
algorithm, called GLM-UCB. We derive finite time, high probability bounds on
the regret of the algorithm, extending previous analyses developed for the linear
bandits to the non-linear case. The analysis highlights a key difficulty in generalizing linear bandit algorithms to the non-linear case, which is solved in GLM-UCB
by focusing on the reward space rather than on the parameter space. Moreover, as
the actual effectiveness of current parameterized bandit algorithms is often poor in
practice, we provide a tuning method based on asymptotic arguments, which leads
to significantly better practical performance. We present two numerical experiments on real-world data that illustrate the potential of the GLM-UCB approach.
Keywords: multi-armed bandit, parametric bandits, generalized linear models,
UCB, regret minimization.
1 Introduction
In the classical K-armed bandit problem, an agent selects at each time step one of the K arms and
receives a reward that depends on the chosen action. The aim of the agent is to choose the sequence
of arms to be played so as to maximize the cumulated reward. There is a fundamental trade-off
between gathering experimental data about the reward distribution (exploration) and exploiting the
arm which seems to be the most promising.
In the basic multi-armed bandit problem, also called the independent bandits problem, the
rewards are assumed to be random and distributed independently according to a probability
distribution that is specific to each arm; see [1, 2, 3, 4] and references therein. Recently, structured
bandit problems in which the distributions of the rewards pertaining to each arm are connected
by a common unknown parameter have received much attention [5, 6, 7, 8, 9]. This model is
motivated by the many practical applications where the number of arms is large, but the payoffs are
interrelated. Up to now, two different models have been studied in the literature along these lines. In
one model, in each time step, a side-information, or context, is given to the agent first. The payoffs
of the arms depend both on this side information and the index of the arm. Thus the optimal arm
changes with the context [5, 6, 9]. In the second, simpler model, that we are also interested in here,
there is no side-information, but the agent is given a model that describes the possible relations
between the arms' payoffs. In particular, in "linear bandits" [10, 8, 11, 12], each arm a ∈ A is
associated with some d-dimensional vector m_a ∈ R^d known to the agent. The expected payoffs
of the arms are given by the inner product of their associated vector and some fixed, but initially
unknown, parameter vector θ*. Thus, the expected payoff of arm a is m_a'θ*, which is linear in θ*.¹
In this article, we study a richer generalized linear model (GLM) in which the expectation of
the reward conditionally on the action a is given by μ(m_a'θ*), where μ is a real-valued, non-linear
function called the (inverse) link function. This generalization allows us to consider a wider class
of problems, and in particular cases where the rewards are counts or binary variables using,
respectively, Poisson or logistic regression. Obviously, this situation is very common in the fields of
marketing, social networking, web-mining (see example of Section 5.2 below) or clinical studies.
Our first contribution is an "optimistic" algorithm, termed GLM-UCB, inspired by the Upper Confidence Bound (UCB) approach [2]. GLM-UCB generalizes the algorithms studied by [10, 8, 12].
Our next contribution is a set of finite-time bounds on the statistical performance of this algorithm. In
particular, we show that the performance depends on the dimension of the parameter but not on the
number of arms, a result that was previously known in the linear case. Interestingly, the GLM-UCB
approach takes advantage of the particular structure of the parameter estimate of generalized linear
models and operates only in the reward space. In contrast, the parameter-space confidence region
approach adopted by [8, 12] appears to be harder to generalize to non-linear regression models.
Our second contribution is a tuning method based on asymptotic arguments. This contribution
addresses the poor empirical performance of the current algorithms that we have observed for small
or moderate sample-sizes when these algorithms are tuned based on finite-sample bounds.
The paper is organized as follows. The generalized linear bandit model is presented in Section 2,
together with a brief survey of needed statistical results. Section 3 is devoted to the description
of the GLM-UCB algorithm, which is compared to related approaches. Section 4 presents our
regret bounds, as well as a discussion, based on asymptotic arguments, on the optimal tuning of the
method. Section 5 reports the results of two experiments on real data sets.
2 Generalized Linear Bandits, Generalized Linear Models
We consider a structured bandit model with a finite, but possibly very large, number of arms. At
each time t, the agent chooses an arm At from the set A (we shall denote the cardinality of A by K).
The prior knowledge available to the agent consists of a collection of vectors {m_a}_{a∈A} of features
which are specific to each arm and a so-called (inverse) link function μ : R → R.
The generalized linear bandit model investigated in this work is based on the assumption that
the payoff R_t received at time t is conditionally independent of the past payoffs and choices, and
satisfies
    E[R_t | A_t] = μ(m_{A_t}'θ*),                                                               (1)
for some unknown parameter vector θ* ∈ R^d. This framework generalizes the linear bandit model
considered by [10, 8, 12]. Just like the linear bandit model builds on linear regression, our model
capitalizes on the well-known statistical framework of Generalized Linear Models (GLMs). The
advantage of this framework is that it allows us to address various specific reward structures widely
found in applications. For example, when rewards are binary-valued, a suitable choice of μ is
μ(x) = exp(x)/(1 + exp(x)), leading to the logistic regression model. For integer-valued rewards,
the choice μ(x) = exp(x) leads to the Poisson regression model. This can be easily extended to the
case of multinomial (or polytomic) logistic regression, which is appropriate to model situations in
which the rewards are associated with categorical variables.
To keep this article self-contained, we briefly review the main properties of GLMs [13]. A
univariate probability distribution is said to belong to a canonical exponential family if its density
with respect to a reference measure is given by
    p_β(r) = exp( rβ − b(β) + c(r) ),                                                           (2)
where β is a real parameter, c(·) is a real function, and the function b(·) is assumed to be twice
continuously differentiable. This family contains the Gaussian and Gamma distributions when the
reference measure is the Lebesgue measure and the Poisson and Bernoulli distributions when the
[Footnote 1: Throughout the paper we use the prime to denote transposition.]
reference measure is the counting measure on the integers. For a random variable R with density
defined in (2), E(R) = ḃ(β) and Var(R) = b̈(β), where ḃ and b̈ denote, respectively, the first and
second derivatives of b. In addition, b̈(β) can also be shown to be equal to the Fisher information
matrix for the parameter β. The function b is thus strictly convex.
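For instance, in the Bernoulli case b(β) = log(1 + exp(β)), so ḃ is the logistic function (the mean) and b̈(β) = p(1 − p) (the variance). The finite-difference check below is our own illustration.

```python
import numpy as np

b = lambda beta: np.log1p(np.exp(beta))
beta, h = 0.7, 1e-5
b_dot = (b(beta + h) - b(beta - h)) / (2 * h)              # ~ mean
b_ddot = (b(beta + h) - 2 * b(beta) + b(beta - h)) / h**2  # ~ variance
p = 1.0 / (1.0 + np.exp(-beta))
assert np.isclose(b_dot, p, atol=1e-6) and np.isclose(b_ddot, p * (1 - p), atol=1e-4)
```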
Now, assume that, in addition to the response variable R, we have at hand a vector of covariates
X ∈ R^d. The canonical GLM associated to (2) postulates that p_θ(r|x) = p_{x'θ}(r), where θ ∈ R^d
is a vector of parameters. Denote by μ = ḃ the so-called inverse link function. From the properties
of b, we know that μ is continuously differentiable, strictly increasing, and thus one-to-one. The
maximum likelihood estimator θ̂_t, based on observations (R_1, X_1), ..., (R_{t−1}, X_{t−1}), is defined as
the maximizer of the function
    Σ_{k=1}^{t−1} log p_θ(R_k | X_k) = Σ_{k=1}^{t−1} [ R_k X_k'θ − b(X_k'θ) + c(R_k) ],
a strictly concave function in θ.² Upon differentiating, we obtain that θ̂_t is the unique solution of
the following estimating equation:
    Σ_{k=1}^{t−1} ( R_k − μ(X_k'θ) ) X_k = 0,                                                   (3)
where we have used the fact that μ = ḃ. In practice, the solution of (3) may be found efficiently
using, for instance, Newton's algorithm.
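A few Newton steps solve (3) in practice. The sketch below specializes to the logistic link; it is a minimal illustration written for this note, not the authors' code.

```python
import numpy as np

def mle_glm(X, R, n_iter=25):
    """Solve sum_k (R_k - mu(X_k' theta)) X_k = 0 by Newton's method
    for the logistic link mu(z) = 1 / (1 + exp(-z))."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = X @ theta
        mu = 1.0 / (1.0 + np.exp(-z))
        grad = X.T @ (R - mu)                 # estimating equation (3)
        W = mu * (1.0 - mu)                   # mu_dot at each covariate
        H = X.T @ (X * W[:, None])            # (negated) Jacobian of the equation
        theta += np.linalg.solve(H + 1e-8 * np.eye(len(theta)), grad)
    return theta
```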
A semi-parametric version of the above model is obtained by assuming only that E_θ[R|X] =
μ(X'θ), without (much) further assumptions on the conditional distribution of R given X. In this
case, the estimator obtained by solving (3) is referred to as the maximum quasi-likelihood estimator.
It is a remarkable fact that this estimator is consistent under very general assumptions as long as the
design matrix Σ_{k=1}^{t−1} X_k X_k' tends to infinity [14]. As we will see, this matrix also plays a crucial
role in the algorithm that we propose for bandit optimization in the generalized linear bandit model.
3 The GLM-UCB Algorithm
According to (1), the agent receives, upon playing arm a, a random reward whose expected value is
μ(m_a'θ*), where θ* ∈ Θ is the unknown parameter. The parameter set Θ is an arbitrary closed subset
of R^d. Any arm with largest expected reward is called optimal. The aim of the agent is to quickly find
an optimal arm in order to maximize the received rewards. The greedy action argmax_{a∈A} μ(m_a'θ̂_t)
may lead to an unreliable algorithm which does not explore sufficiently to guarantee the selection of
an optimal arm. This issue can be addressed by resorting to an "optimistic approach". As described
by [8, 12] in the linear case, an optimistic algorithm consists in selecting, at time t, the arm
    A_t = argmax_a max_θ E_θ[R_t | A_t = a]   s.t.   ‖θ − θ̂_t‖_{M_t} ≤ ρ(t),                   (4)
where ρ is an appropriate, "slowly increasing" function,
    M_t = Σ_{k=1}^{t−1} m_{A_k} m_{A_k}'                                                        (5)
is the design matrix corresponding to the first t − 1 timesteps, and ‖v‖_M = √(v'Mv) denotes the
matrix norm induced by the positive semidefinite matrix M. The region ‖θ − θ̂_t‖_{M_t} ≤ ρ(t) is
a confidence ellipsoid around the estimated parameter θ̂_t. Generalizing this approach beyond the
case of linear link functions looks challenging. In particular, in GLMs, the relevant confidence
regions may have a more complicated geometry in the parameter space than simple ellipsoids. As
a consequence, the benefit of this form of optimistic algorithm appears dubious.³
[Footnote 2: Here, and in what follows, log denotes the natural logarithm.]
[Footnote 3: Note that maximizing μ(m_a'θ) over a convex confidence region is equivalent to maximizing m_a'θ over the same region since μ is strictly increasing. Thus, computationally, this approach is not more difficult than it is for the linear case.]
An alternative approach consists in directly determining an upper confidence bound for the
expected reward of each arm, thus choosing the action a that maximizes
    E_{θ̂_t}[R_t | A_t = a] + ρ(t) ‖m_a‖_{M_t⁻¹}.
In the linear case the two approaches lead to the same solution [12]. Interestingly, for non-linear
bandits, the second approach looks more appropriate.
In the rest of this section, we apply this second approach to the GLM bandit model defined in (1).
According to (3), the maximum quasi-likelihood estimator of the parameter in the GLM is the
unique solution of the estimating equation
    Σ_{k=1}^{t−1} ( R_k − μ(m_{A_k}'θ̂_t) ) m_{A_k} = 0,                                        (6)
where A_1, ..., A_{t−1} denote the arms played so far and R_1, ..., R_{t−1} are the corresponding rewards.
Let g_t(θ) = Σ_{k=1}^{t−1} μ(m_{A_k}'θ) m_{A_k} be the invertible function such that the estimated parameter θ̂_t
satisfies g_t(θ̂_t) = Σ_{k=1}^{t−1} R_k m_{A_k}. Since θ̂_t might be outside of the set of admissible parameters Θ,
we "project it" to Θ, to obtain θ̃_t:
    θ̃_t = argmin_{θ∈Θ} ‖ g_t(θ) − g_t(θ̂_t) ‖_{M_t⁻¹} = argmin_{θ∈Θ} ‖ g_t(θ) − Σ_{k=1}^{t−1} R_k m_{A_k} ‖_{M_t⁻¹}.    (7)
Note that if θ̂_t ∈ Θ (which is easy to check, and which happened to hold always in the examples we
dealt with), then we can let θ̃_t = θ̂_t. This is important since computing θ̃_t is non-trivial, and we can
save this computation by this simple check. The proposed algorithm, GLM-UCB, is as follows:
Algorithm 1 GLM-UCB
1: Input: {m_a}_{a∈A}
2: Play actions a_1, ..., a_d; receive R_1, ..., R_d.
3: for t > d do
4:    Estimate θ̂_t according to (6)
5:    if θ̂_t ∈ Θ then let θ̃_t = θ̂_t, else compute θ̃_t according to (7)
6:    Play the action A_t = argmax_a { μ(m_a'θ̃_t) + ρ(t) ‖m_a‖_{M_t⁻¹} }; receive R_t
7: end for
At time t, for each arm a, an upper bound μ(m_a'θ̃_t) + β_t^a is computed, where the "exploration
bonus" β_t^a = ρ(t) ‖m_a‖_{M_t⁻¹} is the product of two terms. The quantity ρ(t) is a slowly increasing
function; we prove in Section 4 that ρ(t) can be set to guarantee high-probability bounds on the
expected regret (for the actual form used, see (8)). Note that the leading term of β_t^a is ‖m_a‖_{M_t⁻¹},
which decreases to zero as t increases.
As we are mostly interested in the case when the number of arms K is much larger than the
dimension d, the algorithm is simply initialized by playing actions a_1, ..., a_d such that the vectors
m_{a_1}, ..., m_{a_d} form a basis of M = span(m_a, a ∈ A). Without loss of generality, here and in what
follows we assume that the dimension of M is equal to d. Then, by playing a_1, ..., a_d in the first
d steps, the agent ensures that M_t is invertible for all t. An alternative strategy would be to initialize
M_0 = λ_0 I, where I is the d × d identity matrix.
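The loop below sketches GLM-UCB for the logistic link, reusing the mle_glm helper from Section 2. It is our own minimal illustration (no projection step, i.e., it assumes θ̂_t ∈ Θ throughout, and that the first d arms' features form a basis); it is not the authors' implementation.

```python
import numpy as np

def glm_ucb(M_arms, reward_fn, T, rho):
    """M_arms: (K, d) array of feature vectors; reward_fn(a) samples R_t;
    rho: callable t -> exploration level."""
    K, d = M_arms.shape
    mu = lambda z: 1.0 / (1.0 + np.exp(-z))
    theta = np.zeros(d)
    hist_m, hist_r = [], []
    for a in range(d):                        # initialization: play d basis arms
        hist_m.append(M_arms[a]); hist_r.append(reward_fn(a))
    for t in range(d + 1, T + 1):
        X, R = np.array(hist_m), np.array(hist_r)
        theta = mle_glm(X, R)                 # solve estimating equation (6)
        M_inv = np.linalg.inv(X.T @ X)        # inverse design matrix M_t^{-1}
        bonus = rho(t) * np.sqrt(np.einsum('kd,de,ke->k', M_arms, M_inv, M_arms))
        a = int(np.argmax(mu(M_arms @ theta) + bonus))
        hist_m.append(M_arms[a]); hist_r.append(reward_fn(a))
    return theta
```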
3.1 Discussion
The purpose of this section is to discuss some properties of Algorithm 1, and in particular the
interpretation of the role played by ‖m_a‖_{M_t⁻¹}.
Generalizing UCB  The standard UCB algorithm for K arms [2] can be seen as a special case of
GLM-UCB where the vectors of covariates associated with the arms form an orthogonal system and
μ(x) = x. Indeed, take d = K, A = {1, ..., K}, define the vectors {m_a}_{a∈A} as the canonical basis
{e_a}_{a∈A} of R^d, and take θ ∈ R^d to be the vector whose component θ_a is the expected reward for arm a.
Then, M_t is a diagonal matrix whose a-th diagonal element is the number N_t(a) of times the
a-th arm has been played up to time t. Therefore, the exploration bonus in GLM-UCB is given by
β_t^a = ρ(t)/√(N_t(a)). Moreover, the maximum quasi-likelihood estimator θ̂_t satisfies R̄_t^a = θ̂_t(a)
for all a ∈ A, where R̄_t^a = (1/N_t(a)) Σ_{k=1}^{t−1} R_k 1{A_k = a} is the empirical mean of the rewards received
while playing arm a. Algorithm 1 then reduces to the familiar UCB algorithm. In this case, it
is known that the expected cumulated regret can be controlled upon setting the slowly varying
function ρ to ρ(t) = √(2 log t), assuming that the range of the rewards is bounded by one [2].
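The reduction is immediate to verify numerically: with canonical-basis features the design matrix is diagonal with the play counts, so the GLM-UCB bonus collapses to the classical 1/√(N_t(a)) term. Our own check:

```python
import numpy as np

counts = np.array([5, 2, 9])                # N_t(a) for K = 3 arms
M_t = np.diag(counts).astype(float)         # M_t = sum of e_{A_k} e_{A_k}'
M_inv = np.linalg.inv(M_t)
for a in range(3):
    e_a = np.eye(3)[a]
    assert np.isclose(np.sqrt(e_a @ M_inv @ e_a), 1 / np.sqrt(counts[a]))
```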
Generalizing linear bandits  Obviously, setting μ(x) = x, we obtain a linear bandit model. In
this case, assuming that Θ = R^d, the algorithm reduces to those described in the papers [8, 12].
In particular, the maximum quasi-likelihood estimator becomes the least-squares estimator and, as
noted earlier, the algorithm behaves identically to one which chooses the parameter optimistically
within the confidence ellipsoid {θ : ‖θ − θ̂_t‖_{M_t} ≤ ρ(t)}.
Dependence in the Number of Arms  In contrast to an algorithm such as UCB, Algorithm 1
does not need all arms to be played even once.⁴ To understand this phenomenon, observe that,
as M_{t+1} = M_t + m_{A_t} m_{A_t}', we have
    ‖m_a‖²_{M_{t+1}⁻¹} = ‖m_a‖²_{M_t⁻¹} − (m_a'M_t⁻¹ m_{A_t})² / (1 + ‖m_{A_t}‖²_{M_t⁻¹})
for any arm a. Thus the exploration bonus β_{t+1}^a decreases for all arms, except those which are exactly
orthogonal to m_{A_t} (in the M_t⁻¹ metric). The decrease is most significant for arms that are colinear
to m_{A_t}. This explains why the regret bounds obtained in Theorems 1 and 2 below depend on d but
not on K.
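This is just the Sherman-Morrison identity applied to the rank-one update of M_t; the snippet below (our illustration) confirms the displayed formula.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
A = rng.normal(size=(d, d))
M = A @ A.T + np.eye(d)                      # symmetric positive definite M_t
m_a, m_At = rng.normal(size=d), rng.normal(size=d)

M_inv = np.linalg.inv(M)
lhs = m_a @ np.linalg.inv(M + np.outer(m_At, m_At)) @ m_a
rhs = m_a @ M_inv @ m_a - (m_a @ M_inv @ m_At) ** 2 / (1 + m_At @ M_inv @ m_At)
assert np.isclose(lhs, rhs)
```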
4 Theoretical analysis
In this section we first give our finite sample regret bounds and then show how the algorithm can be
tuned based on asymptotic arguments.
4.1 Regret Bounds
To quantify the performance of the GLM-UCB algorithm, we consider the cumulated (pseudo-)
regret, defined as the expected difference between the optimal reward obtained by always playing
an optimal arm and the reward received following the algorithm:
    Regret_T = Σ_{t=1}^T ( μ(m_{a*}'θ*) − μ(m_{A_t}'θ*) ).
For the sake of the analysis, in this section we shall assume that the following assumptions hold:
Assumption 1. The link function μ : R → R is continuously differentiable, Lipschitz with constant
k_μ, and such that c_μ = inf_{θ∈Θ, a∈A} μ̇(m_a'θ) > 0.
For the logistic function k_μ = 1/4, while the value of c_μ depends on sup_{θ∈Θ, a∈A} |m_a'θ|.
Assumption 2. The norm of the covariates in {m_a : a ∈ A} is bounded: there exists c_m < ∞ such
that for all a ∈ A, ‖m_a‖₂ ≤ c_m.
Finally, we make the following assumption on the rewards:
Assumption 3. There exists R_max > 0 such that for any t ≥ 1, 0 ≤ R_t ≤ R_max holds a.s. Let
ε_t = R_t − μ(m_{A_t}'θ*). For all t ≥ 1, it holds that E[ε_t | m_{A_t}, ε_{t−1}, ..., m_{A_2}, ε_1, m_{A_1}] = 0 a.s.
As for the standard UCB algorithm, the regret can be analyzed in terms of the difference between
the expected reward received playing an optimal arm and that of the best sub-optimal arm:
    Δ(θ*) = min_{a : μ(m_a'θ*) < μ(m_{a*}'θ*)} ( μ(m_{a*}'θ*) − μ(m_a'θ*) ).
Theorem 1 establishes a high-probability bound on the regret when using GLM-UCB with
    ρ(t) = (2 k_μ κ R_max / c_μ) √( 2 d log(t) log(2 d T / δ) ),                                (8)
where T is the fixed time horizon, κ = √(3 + 2 log(1 + 2 c_m²/λ_0)), and λ_0 denotes the smallest
eigenvalue of Σ_{i=1}^d m_{a_i} m_{a_i}', which by our previous assumption is positive.
[Footnote 4: Of course, the linear bandit algorithms also share this property with our algorithm.]
Theorem 1 (Problem Dependent Upper Bound). Let $s = \max(1, c_m^2/\lambda_0)$. Then, under Assumptions 1–3, for all $T \ge 1$, the regret satisfies:
$$\mathbb{P}\left( \text{Regret}_T \le (d+1) R_{\max} + \frac{C\, d^2}{\Delta(\theta_*)} \log^2[s\, T] \log\left(\frac{2\, d\, T}{\delta}\right) \right) \ge 1 - \delta \quad \text{with } C = \frac{32\, \kappa^2 R_{\max}^2 k_\mu^2}{c_\mu^2}.$$
Note that the above regret bound depends on the true value of $\theta_*$ through $\Delta(\theta_*)$. The following theorem provides an upper bound on the regret independently of $\theta_*$.
Theorem 2 (Problem Independent Upper Bound). Let $s = \max(1, c_m^2/\lambda_0)$. Then, under Assumptions 1–3, for all $T \ge 1$, the regret satisfies
$$\mathbb{P}\left( \text{Regret}_T \le (d+1) R_{\max} + C\, d \log[s\, T] \sqrt{T \log\left(\frac{2\, d\, T}{\delta}\right)} \right) \ge 1 - \delta \quad \text{with } C = \frac{8\, R_{\max} k_\mu \kappa}{c_\mu}.$$
The proofs of Theorems 1–2 can be found in the supplementary material. The main idea is to use the explicit form of the estimator given by (6) to show that
$$\mu(m_{A_t}^\top \theta_*) - \mu(m_{A_t}^\top \hat\theta_t) \le \frac{k_\mu}{c_\mu}\, \|m_{A_t}\|_{M_t^{-1}} \left\| \sum_{k=1}^{t-1} m_{A_k} \epsilon_k \right\|_{M_t^{-1}}.$$
Bounding the last term on the right-hand side is then carried out following the lines of [12].
4.2 Asymptotic Upper Confidence Bound
Preliminary experiments carried out using the value of $\rho(t)$ defined in equation (8), including the case where $\mu$ is the identity function (i.e., using the algorithm described by [8, 12]), revealed poor performance for moderate sample sizes. A look into the proof of the regret bound easily explains this observation, as the mathematical involvement of the arguments is such that some approximations seem unavoidable, in particular several applications of the Cauchy-Schwarz inequality, leading to pessimistic confidence bounds. We provide here some asymptotic arguments that suggest choosing significantly smaller exploration bonuses, which will in turn be validated by the numerical experiments presented in Section 5.
Consider the canonical GLM associated with an inverse link function $\mu$ and assume that the vectors of covariates $X$ are drawn independently under a fixed distribution. This random design model would for instance describe the situation when the arms are drawn randomly from a fixed distribution. Standard statistical arguments show that the Fisher information matrix pertaining to this model is given by $J = \mathbb{E}[\dot\mu(X^\top \theta_*)\, X X^\top]$ and that the maximum likelihood estimate $\hat\theta_t$ is such that $t^{1/2}(\hat\theta_t - \theta_*) \xrightarrow{D} \mathcal{N}(0, J^{-1})$, where $\xrightarrow{D}$ stands for convergence in distribution. Moreover, $t^{-1} M_t \xrightarrow{a.s.} \Sigma$ where $\Sigma = \mathbb{E}[X X^\top]$. Hence, using the delta-method and Slutsky's lemma,
$$\|m_a\|_{M_t^{-1}}^{-1} \left( \mu(m_a^\top \hat\theta_t) - \mu(m_a^\top \theta_*) \right) \xrightarrow{D} \mathcal{N}\left(0,\ \dot\mu(m_a^\top \theta_*)\, \|m_a\|_{\Sigma^{-1}}^{-2}\, \|m_a\|_{J^{-1}}^{2}\right).$$
The right-hand variance is smaller than $k_\mu / c_\mu$ as $J \succeq c_\mu \Sigma$. Hence, for any sampling distribution such that $J$ and $\Sigma$ are positive definite and sufficiently large $t$ and small $\delta$,
$$\mathbb{P}\left( \|m_a\|_{M_t^{-1}}^{-1} \left( \mu(m_a^\top \hat\theta_t) - \mu(m_a^\top \theta_*) \right) > \sqrt{2 (k_\mu / c_\mu) \log(1/\delta)} \right)$$
is asymptotically bounded by $\delta$. Based on the above asymptotic argument, we postulate that using $\rho(t) = \sqrt{2 (k_\mu / c_\mu) \log(t)}$, i.e., inflating the exploration bonus by a factor of $\sqrt{k_\mu / c_\mu}$ compared to the usual UCB setting, is sufficient. This is the setting used in the simulations below.
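For instance, with the logistic link the inflated bonus can be computed directly (a sketch; the bound B on sup |m_a' theta| is an assumption made up for the example):

    import numpy as np

    def logistic_rho(t, B=2.0):
        """Asymptotic tuning rho(t) = sqrt(2 (k_mu / c_mu) log t) for the
        logistic link, assuming |m_a' theta| <= B (B = 2 is illustrative)."""
        mu_dot = lambda z: np.exp(z) / (1.0 + np.exp(z)) ** 2  # logistic derivative
        k_mu = 0.25          # Lipschitz constant of the logistic function
        c_mu = mu_dot(B)     # infimum of the derivative on [-B, B]
        return np.sqrt(2.0 * (k_mu / c_mu) * np.log(t))

    print(logistic_rho(1000.0))  # much smaller than the bound-based rho(t) in (8)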
5 Experiments
To the best of our knowledge, there is currently no public benchmark available to test bandit
methods on real world data. On simulated data, the proposed method unsurprisingly outperforms
its competitors when the data is indeed simulated from a well-specified generalized linear model.
In order to evaluate the potential of the method in more challenging scenarios, we thus carried out
two experiments using real world datasets.
5.1 Forest Cover Type Data
In this first experiment, we test the performance of the proposed method on a toy problem using the "Forest Cover Type" dataset from the UCI repository. The dataset (centered and normalized with a constant covariate added, resulting in 11-dimensional vectors, ignoring all categorical variables) has been partitioned into K = 32 clusters using unsupervised k-means. The values of the response variable for the data points assigned to each cluster are viewed as the outcomes of an arm, while the centroid of the cluster is taken as the 11-dimensional vector of covariates characteristic of the arm. To cast the problem into the logistic regression framework, each response variable is binarized by associating the first class ("Spruce/Fir") to a response R = 1 and all other six classes to R = 0. The proportion of responses equal to 1 in each cluster (or, in other words, the expected reward associated with each arm) ranges from 0.354 to 0.992, while the proportion on the complete set of 581,012 data points is equal to 0.367. In effect, we try to locate as fast as possible the cluster that contains the maximal proportion of trees from a given species. We are faced with a 32-arm problem in an 11-dimensional space with binary rewards. Obviously, the logistic regression model is not satisfied, although we do expect some regularity with respect to the position of the cluster's centroid, as the logistic regression trained on all data reaches a 0.293 misclassification rate.
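A minimal sketch of this preprocessing (assuming NumPy and scikit-learn are available; this is our reconstruction of the setup, not the authors' exact pipeline):

    import numpy as np
    from sklearn.cluster import KMeans

    def build_arms(X, y, n_arms=32, seed=0):
        """Cluster covariates into arms; the reward pool of an arm is the
        binarized response of its cluster (illustrative reconstruction)."""
        X = (X - X.mean(axis=0)) / X.std(axis=0)   # center and normalize
        labels = KMeans(n_clusters=n_arms, random_state=seed).fit_predict(X)
        centroids = np.array([X[labels == k].mean(axis=0) for k in range(n_arms)])
        centroids = np.hstack([np.ones((n_arms, 1)), centroids])  # add constant
        rewards = [(y[labels == k] == 1).astype(float) for k in range(n_arms)]
        return centroids, rewards  # covariates m_a and per-arm reward pools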
[Figure 1: Top: Regret of the UCB, GLM-UCB and the ε-greedy algorithms. Bottom: Frequencies of the draws of the 20 best arms using the UCB and GLM-UCB. (Plots; top axes: Regret_t vs. t; bottom axes: draw counts vs. arm a.)]
We compare the performance of three algorithms. First, the GLM-UCB algorithm, with parameters tuned as indicated in Section 4.2. Second, the standard UCB algorithm that ignores the covariates. Third, an ε-greedy algorithm that performs logistic regression and plays the best estimated action, $A_t = \arg\max_a \mu(m_a^\top \hat\theta_t)$, with probability $1 - \epsilon$ (with $\epsilon = 0.1$). We observe in the top graph of Figure 1 that the GLM-UCB algorithm achieves the smallest average regret by a large margin. When the parameter is well estimated, the greedy algorithm may find the best arm in little time and then lead to small regrets. However, the exploration/exploitation tradeoff is not correctly handled by the ε-greedy approach, causing a large variability in the regret. The lower plot of Figure 1 shows the number of times each of the 20 best arms has been played by the UCB and GLM-UCB algorithms. The arms are sorted in decreasing order of expected reward. It can be observed that GLM-UCB only plays a small subset of all possible arms, concentrating on the best ones. This behavior is made possible by the predictive power of the covariates: by sharing information between arms, it is possible to obtain sufficiently accurate predictions of the expected rewards of all actions, even for those that have never (or rarely) been played.
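For reference, one GLM-UCB decision in this setting can be sketched as follows (an illustration assuming a logistic link; the Newton solver, ridge terms and function names are our simplifications, not the authors' implementation):

    import numpy as np

    def glm_ucb_choose(M_arms, plays, rewards, t, rho):
        """Fit the logistic quasi-likelihood estimate by Newton-Raphson, then
        play argmax_a mu(m_a' theta) + rho(t) * ||m_a||_{M_t^{-1}}."""
        mu = lambda z: 1.0 / (1.0 + np.exp(-z))
        X = M_arms[np.asarray(plays)]          # covariates of past actions
        r = np.asarray(rewards, dtype=float)   # past rewards
        theta = np.zeros(M_arms.shape[1])
        for _ in range(25):                    # Newton steps on the likelihood
            p = mu(X @ theta)
            W = p * (1.0 - p) + 1e-9
            H = X.T @ (W[:, None] * X) + 1e-6 * np.eye(len(theta))
            theta += np.linalg.solve(H, X.T @ (r - p))
        M_t_inv = np.linalg.inv(X.T @ X + 1e-6 * np.eye(len(theta)))
        bonus = np.sqrt(np.einsum('ad,dk,ak->a', M_arms, M_t_inv, M_arms))
        return int(np.argmax(mu(M_arms @ theta) + rho(t) * bonus))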
5.2 Internet Advertisement Data
In this experiment, we used a large record of the activity of internet users provided by a major ISP. The original dataset logs the visits to a set of 1222 pages over a six-day period, corresponding to about $5 \times 10^8$ page visits. The dataset also contains a record of the users' clicks on the ads that were presented on these pages. We worked with a subset of 208 ads and $3 \times 10^5$ users. The pages (ads) were partitioned into 10 (respectively, 8) categories using Latent Dirichlet Allocation [15] applied to their respective textual content (in the case of ads, the textual content was that of the page pointed to by the ad's link). This second experiment is much more challenging, as the predictive power of the sole textual information turns out to be quite limited (for instance, Poisson regression trained on the entire data does not even correctly identify the best arm).
The action space is composed of the 80 pairs of page and ad categories: when a pair is chosen, it is presented to a group of 50 users, randomly selected from the database, and the reward is the number of recorded clicks. As the average reward is typically equal to 0.15, we use a logarithmic link function corresponding to Poisson regression. The vector of covariates for each pair is of dimension 19: it is composed of an intercept followed by the concatenation of two vectors of dimension 10 and 8 representing, respectively, the categories of the pages and the ads. In this problem, the covariate vectors do not span the entire space; to address this issue, it is sufficient to consider the pseudo-inverse of $M_t$ instead of the inverse.
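Concretely, the bonus can then be computed with the Moore-Penrose pseudo-inverse, e.g. (sketch):

    import numpy as np

    def bonus_with_pinv(m_a, M_t):
        """Exploration bonus ||m_a|| under the pseudo-inverse of M_t, for
        covariates that span only a subspace (sketch)."""
        return float(np.sqrt(m_a @ np.linalg.pinv(M_t) @ m_a))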
On this data, we compared the GLM-UCB algorithm with the two alternatives described in
Section 5.1. Figure 2 shows that GLM-UCB once again outperforms its competitors, even though
the margin over UCB is now less remarkable. Given the rather limited predictive power of the
covariates in this example, this is an encouraging illustration of the potential of techniques which
use vectors of covariates in real-life applications.
[Figure 2: Comparison of the regret of the UCB, GLM-UCB and the ε-greedy (ε = 0.1) algorithms on the advertisement dataset. (Plot; axes: Regret vs. t.)]
6 Conclusions
We have introduced an approach that generalizes the linear regression model studied by [10, 8, 12].
As in the original UCB algorithm, the proposed GLM-UCB method operates directly in the reward
space. We discussed how to tune the parameters of the algorithm to avoid exaggerated optimism,
which would slow down learning. In the numerical simulations, the proposed algorithm was
shown to be competitive and sufficiently robust to tackle real-world problems. An interesting
open problem (already challenging in the linear case) consists in tightening the theoretical results
obtained so far in order to bridge the gap between the existing (pessimistic) confidence bounds and
those suggested by the asymptotic arguments presented in Section 4.2, which have been shown to
perform satisfactorily in practice.
Acknowledgments
This work was supported in part by AICML, AITF, NSERC, PASCAL2 under no 216886, the
DARPA GALE project under no HR0011-08-C-0110 and Orange Labs under contract no 289365.
References
[1] T.L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[4] J. Audibert, R. Munos, and Cs. Szepesvári. Tuning bandit algorithms in stochastic environments. Lecture Notes in Computer Science, 4754:150, 2007.
[5] C.C. Wang, S.R. Kulkarni, and H.V. Poor. Bandit problems with side observations. IEEE Transactions on Automatic Control, 50(3):338–355, 2005.
[6] J. Langford and T. Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. Advances in Neural Information Processing Systems, pages 817–824, 2008.
[7] S. Pandey, D. Chakrabarti, and D. Agarwal. Multi-armed bandit problems with dependent arms. International Conference on Machine Learning, pages 721–728, 2007.
[8] V. Dani, T.P. Hayes, and S.M. Kakade. Stochastic linear optimization under bandit feedback. Conference on Learning Theory, 2008.
[9] S.M. Kakade, S. Shalev-Shwartz, and A. Tewari. Efficient bandit algorithms for online multiclass prediction. In Proceedings of the 25th International Conference on Machine Learning, pages 440–447. ACM, 2008.
[10] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3:397–422, 2002.
[11] Y. Abbasi-Yadkori, A. Antos, and Cs. Szepesvári. Forced-exploration based algorithms for playing in stochastic linear bandits. In COLT Workshop on On-line Learning with Limited Feedback, 2009.
[12] P. Rusmevichientong and J.N. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395–411, 2010.
[13] P. McCullagh and J.A. Nelder. Generalized Linear Models. Chapman and Hall, 1989.
[14] K. Chen, I. Hu, and Z. Ying. Strong consistency of maximum quasi-likelihood estimators in generalized linear models with fixed and adaptive designs. Annals of Statistics, 27(4):1155–1163, 1999.
[15] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. Advances in Neural Information Processing Systems, 14:601–608, 2002.
[16] V.H. De La Peña, M.J. Klass, and T.L. Lai. Self-normalized processes: exponential inequalities, moment bounds and iterated logarithm laws. Annals of Probability, 32(3):1902–1933, 2004.
[17] P. Rusmevichientong and J.N. Tsitsiklis. Linearly parameterized bandits. arXiv preprint arXiv:0812.3465v2, 2008.
The Neural Costs of Optimal Control
Samuel J. Gershman and Robert C. Wilson
Psychology Department and Neuroscience Institute
Princeton University
Princeton, NJ 08540
{sjgershm,rcw2}@princeton.edu
Abstract
Optimal control entails combining probabilities and utilities. However, for most
practical problems, probability densities can be represented only approximately.
Choosing an approximation requires balancing the benefits of an accurate approximation against the costs of computing it. We propose a variational framework for
achieving this balance and apply it to the problem of how a neural population code
should optimally represent a distribution under resource constraints. The essence
of our analysis is the conjecture that population codes are organized to maximize
a lower bound on the log expected utility. This theory can account for a plethora
of experimental data, including the reward-modulation of sensory receptive fields,
GABAergic effects on saccadic movements, and risk aversion in decisions under
uncertainty.
1 Introduction
Acting optimally under uncertainty requires comparing the expected utility of each possible action,
but in most situations of practical interest this expectation is impossible to calculate exactly: the
hidden states that must be integrated over may be high-dimensional and the probability density may
not take on any simple form. As a consequence, approximations must inevitably be used. Typically
one has a choice of approximation, with more exact approximations demanding more computational
resources, a penalty that can be naturally incorporated into the utility function. The question we
address in this paper is: given a family of approximations and their associated resource demands,
what approximation will lead as close as possible to the optimal control policy?
This is a poignant problem for the brain, which expends a colossal amount of metabolic energy in building an internal model of the world. Previous theoretical work has studied how "energy-efficient codes" might be constructed by the brain to maximize information transfer with the least possible energy consumption [10]. However, maximizing information transfer is only one component of adaptive behavior; the utility of information must be taken into account when choosing a code [15], and this may interact in complicated ways with the computational costs of approximate inference.
Our contribution is to place this problem within a decision-theoretic framework by representing the choice of approximation as a "meta-decision" with its own expected utility. Central to our analysis
is the observation that while this expected utility cannot be maximized directly, it is possible to
maximize a variational lower bound on log expected utility (see also [17, 5] for related approaches).
We study the properties of this lower bound and show how it accounts for some intriguing empirical
properties of neural codes.
2 Optimal control with approximate densities
Let $a$ denote an action and $s$ denote a hidden state variable drawn from some probability density $p(s)$.¹ Given a utility function $U(a; s)$, the optimal action $a_p$ is the one that maximizes expected utility $V_p(a)$:
$$a_p = \arg\max_a V_p(a), \qquad (1)$$
where
$$V_p(a) = \mathbb{E}_p[U(a; s)] = \int_s p(s)\, U(a; s)\, ds. \qquad (2)$$
Computing the expected utility for each action requires solving a possibly intractable integral. An
approximation of expected utility can be obtained by substituting an alternative density q(s) for
which the expected utility is tractable. For example, one might choose q(s) to be a Gaussian with
some mean and variance, or a Monte Carlo approximation, or even a delta function at some point.
Using an approximate density presents the "meta-decision" of which density to use. If one chooses optimally under $q(s)$, then the expected utility is given by $\mathbb{E}_p[U(a_q; s)] = V_p(a_q)$; therefore the optimal density $q^*$ should be chosen according to
$$q^* = \arg\max_{q \in \mathcal{Q}} V_p(a_q), \qquad (3)$$
where $\mathcal{Q}$ is some family of densities. To understand Eq. 3, consider the optimization as consisting of two parts: first, select an approximate density $q(s)$ and choose the optimal action with respect to this density; then evaluate the true value of that action under the target density. Clearly, if $p \in \mathcal{Q}$, then $q = p$ is the optimal solution. In general, we cannot optimize this function directly because it requires solving precisely the integral we are trying to avoid: the expected utility under $p(s)$. We can, however, use the approximate density to lower-bound the log expected utility under $p(s)$ by appealing to Jensen's inequality:
$$\log V_p(a) \ge \int_s q(s) \log \frac{p(s)\, U(a; s)}{q(s)}\, ds = \mathbb{E}_q[\log U(a; s)] + \mathbb{E}_q[\log p(s)] - \mathbb{E}_q[\log q(s)]. \qquad (4)$$
Notice the similarity to the evidence lower bound used in variational Bayesian inference [9]: whereas in variational inference we attempt to lower-bound the log marginal likelihood (evidence), in variational decision theory we attempt to lower-bound the log expected utility.
Examining the utility lower bound, we see that the terms exert conceptually distinct influences:
1. A utility component, $\mathbb{E}_q[\log U(a; s)]$, the expected log utility under the approximate density.
2. A cross-entropy component, $-\mathbb{E}_q[\log p(s)]$, reflecting the mismatch between the approximate density and the target density. This can be thought of as a form of "sensory prediction error."
3. An entropy component, $-\mathbb{E}_q[\log q(s)]$, embodying a maximum entropy principle [8]: for a fixed utility and cross-entropy, choose the distribution with maximal entropy.
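To make the three-term decomposition concrete, here is a small numerical sketch for a discrete state space (the densities and utilities are made up for the example):

    import numpy as np

    def utility_lower_bound(q, p, U_a):
        """Jensen lower bound on log E_p[U(a;s)] for discrete densities:
        E_q[log U] + E_q[log p] - E_q[log q]  (Eq. 4)."""
        return np.sum(q * (np.log(U_a) + np.log(p) - np.log(q)))

    p = np.array([0.2, 0.5, 0.3])    # target density over three states
    U_a = np.array([1.0, 2.0, 4.0])  # utility of action a in each state
    q = np.array([0.1, 0.4, 0.5])    # a candidate approximate density

    assert utility_lower_bound(q, p, U_a) <= np.log(p @ U_a)
    # at q = p the bound equals E_p[log U], which still sits below log E_p[U]
    assert np.isclose(utility_lower_bound(p, p, U_a), np.sum(p * np.log(U_a)))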
Intuitively, a more accurate approximate density $q(s)$ should incur a larger computational cost. One way to express this notion of cost is to incorporate it directly into the utility function. That is, we consider an augmented utility function $U(a, q; s)$ that depends on the approximate density. If we assume that the utility function takes the form $\log U(a, q; s) = \log R(a; s) - \log C(q)$, where $R(a; s)$ represents a reward function and $C(q)$ represents a computational cost function, we arrive at the following modification to the utility lower bound:
$$\mathcal{L}(q, a) = \mathbb{E}_q[\log R(a; s)] + \mathbb{E}_q[\log p(s)] - \mathbb{E}_q[\log q(s)] - \log C(q). \qquad (5)$$
¹ For the sake of notational simplicity, we implicitly condition on any observed variables. We also refer throughout this paper to probability densities over a multidimensional, continuous state variable, but our results still apply to one-dimensional and discrete variables (in which case the probability densities are replaced with probability mass functions).
The assumption that the log utility decomposes into additive reward and cost components is intuitive: it implies that reward is measured relative to the computational cost of earning it. In summary,
the utility lower bound L(q, a) provides an objective function for simultaneously choosing an action
and choosing an approximate density over hidden states. Whereas in classical decision theory, optimization is performed over the action space, in variational decision theory optimization is performed
over the joint space of actions and approximate densities. Perception and action are thereby treated
as a single optimization problem.
3 Choosing a probabilistic population code
While the theory developed in the previous section applies to any representation scheme, in this
section, for illustrative purposes, we focus on one specific family of approximate densities defined
by the firing rate of neurons in a network. Specifically, we consider a population of N neurons tasked
with encoding a probability density over s. One way to do this, known as a kernel density estimate
(KDE) code [1, 28], is to associate with each neuron a kernel density $f_n(s)$ and then approximate
the target density with a convex combination of the kernel densities:
$$q(s) = \frac{1}{Z} \sum_{n=1}^{N} e^{x_n} f_n(s), \qquad (6)$$
where $x_n$ denotes the firing rate of neuron $n$ and $Z = \sum_{n=1}^{N} e^{x_n}$. We assume that the kernel density functions are Gaussian, parameterized by a preferred stimulus (mean) $s_n$ and a standard deviation $\sigma_n$:
$$f_n(s) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\left( -\frac{(s - s_n)^2}{2\sigma_n^2} \right). \qquad (7)$$
For simplicity, in this paper we will focus on the limiting case in which $\sigma \to 0$.² In this case $q(s)$
degenerates onto a collection of delta functions:
$$q(s) = \frac{1}{Z} \sum_{n=1}^{N} e^{x_n} \delta(s - s_n), \qquad (8)$$
where $\delta(\cdot)$ is the Dirac delta function. This density corresponds to a collection of sharply tuned neurons; provided that the preferred values $\{s_1, \dots, s_N\}$ densely cover the state space, $q(s)$ can represent arbitrarily complicated densities by varying the firing rates $x$.
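In this limit the code is simply a softmax-weighted set of atoms; a minimal sketch (the helper name and example firing rates are ours):

    import numpy as np

    def population_density(x, s_pref):
        """Delta-function population code (Eq. 8): weights exp(x_n)/Z placed
        on the preferred stimuli s_n. Returns atoms and their weights."""
        w = np.exp(x - x.max())   # subtract the max for numerical stability
        return s_pref, w / w.sum()

    s_pref = np.linspace(0.0, 1.0, 50)       # preferred stimuli tiling the space
    x = -0.5 * (s_pref - 0.3) ** 2 / 0.01    # example firing-rate profile
    atoms, weights = population_density(x, s_pref)
    mean_s = weights @ atoms                 # E_q[s] under the population code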
3.1 Optimizing the bound
Assuming for the moment that there is only a single action, we can state the optimization problem as follows: given the family of approximate densities parameterized by $x$, choose the density that maximizes the utility lower bound
$$\mathcal{L}(q, a) = \frac{1}{Z} \sum_{n=1}^{N} e^{x_n} \left[ \log U(a; s_n) + \log \tilde{p}(s_n) - x_n \right] + \log Z - \log B - \log C(q), \qquad (9)$$
where $p(s) = \tilde{p}(s)/B$ (i.e., $\tilde{p}(s)$ is the un-normalized target density). Note also that $B = \int_s \tilde{p}(s)\, ds$ does not depend on $x_n$, and hence can be ignored for the purposes of optimization. Technically, the lower bound is not well defined in the limit because the target density is non-atomic (i.e., has zero mass at any given value). However, approximating the expectations in Eq. 5 by $\mathbb{E}_q[g(s)] \approx Z^{-1} \sum_{n=1}^{N} e^{x_n} g(s_n)$, as we do above, can be justified in terms of first-order Taylor series expansions around the preferred stimuli, which will be arbitrarily accurate as $\sigma \to 0$.
In the rest of this paper, we shall assume that the cost function takes the following form:
$$C(q) = \kappa N + \lambda \sum_{n=1}^{N} x_n, \qquad (10)$$
where $\kappa$ is the fixed cost of maintaining a neuron, and $\lambda$ is the cost of a spike (cf. [10]).
² The case of small, finite $\sigma$ can be addressed by using a Laplace approximation to the integrals and leads to small correction terms in the following equations.
[Figure 1: Comparison between coding schemes. The leftmost panel shows a collection of probability distributions with different variances, and the other panels show different neural representations of these distributions. (Panels: (a) probability distributions, (b) convolutional coding, (c) gain coding, (d) exponential coding; axes: probability density vs. $s$ and firing rate vs. neuron number.)]
We next seek a neuronal update rule that performs gradient ascent on the utility lower bound. Holding the firing rate of all neurons except $n$ fixed, taking the partial derivative of $\mathcal{L}(q, a)$ with respect to $x_n$ and setting it to 0, we arrive at the following update rule:
$$x_n \leftarrow \left[ \log U(a; s_n) + \log \tilde{p}(s_n) + \frac{1}{Z} \sum_{j=1}^{N} e^{x_j} \left[ x_j - \log U(a; s_j) - \log \tilde{p}(s_j) \right] - \frac{Z\lambda}{e^{x_n}\, C(q)} \right]_+, \qquad (11)$$
where $[\cdot]_+$ denotes linear rectification.³ This update rule defines an attractor network whose Lyapunov function is the (negative) utility lower bound. When multiple actions are involved, the bound
can be jointly optimized over a and q by coordinate ascent. While somewhat untraditional, we
note that this update rule is biologically plausible in the sense that it only involves local pairwise
interactions between neurons.
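A direct implementation of this fixed-point iteration (a sketch based on our reading of Eq. 11; the values of κ and λ, the initialization and the iteration count are illustrative):

    import numpy as np

    def optimize_code(log_U, log_p_tilde, kappa=1.0, lam=0.1, iters=200):
        """Coordinate update of Eq. 11: each x_n moves toward
        g_n + E_q[x - g] - Z*lam / (exp(x_n) * C(q)), rectified at zero."""
        g = log_U + log_p_tilde
        N = len(g)
        x = np.ones(N)
        for _ in range(iters):
            for n in range(N):
                w = np.exp(x)
                Z = w.sum()
                C = kappa * N + lam * x.sum()   # cost function of Eq. 10
                mean_gap = (w @ (x - g)) / Z    # (1/Z) sum_j e^{x_j}(x_j - g_j)
                x[n] = max(0.0, g[n] + mean_gap - Z * lam / (w[n] * C))
        return x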
4 Relation to other probability coding schemes
4.1 Exponential, convolutional and gain coding
The probability coding scheme proposed in Eq. 8 is closely related to the exponential coding described in [16]. That scheme also encodes probabilities using exponentiated activities, although
it uses the representation in a very different way and in a network with very different dynamics,
focusing on sequential inference problems instead of the arbitrary decision problems we consider
here. Other related schemes include convolutional coding [28], in which a distribution is encoded
by convolving it with a neural tuning function, and gain coding [11, 27], in which the variance of
the distribution is inversely proportional to the gain of the neural response.
In Figure 1, we show how these three different ways of encoding probability distributions represent
three different Gaussians with variance 2 (black line in Figure 1a), 4 (red) and 10 (blue) units.
Convolutional coding (Figure 1b) is characterized by a neural response pattern that gets broader as
the distribution gets broader. This has been one of the major criticisms of this type of encoding
scheme as this result does not seem to be borne out experimentally (e.g., [19, 2]). In contrast, gain
coding schemes (Figure 1c) posit that changes in uncertainty only change the overall gain, and not
the shape, of the neural response. This leads to predictions that are consistent with experiments, but
limits the type of distributions that can be represented to the exponential family [11].
Finally, Figure 1d shows how the exponential coding scheme we propose represents the distributions
in a manner that can be thought of as in between convolutional coding and gain encoding, with
a population response that gets broader as the encoded distribution broadens, but in a much less pronounced way than pure convolutional coding. This point is crucial for the biological plausibility of this scheme, as it seems unlikely that these minute differences in population response width would be easily measured experimentally.
³ This update is equivalent to performing gradient ascent on $\mathcal{L}$ with a variable learning rate parameter given by $e^{x_n}/Z$. We chose this rule as it converges faster and seems more neurally plausible than the pure gradient ascent.
It is also important to note that both the convolutional and gain coding schemes ignore the utility function in constructing probabilistic representations. As we explore in later sections, rewards and costs place strong constraints on the types of codes that are learned by the variational objective, and the available experimental data is congruent with this view. "Pure" probabilistic representations may not exist in the brain.
4.2 Connection to Monte Carlo approximation
Substantial interest has been generated recently in the idea that the brain might use some form of
sampling (i.e., Monte Carlo algorithm) to approximate complicated probability densities. Psychological phenomena like perceptual multistability [6] and speech perception [21] are parsimoniously
explained by a model in which a density over the complete hypothesis space is replaced by a small
set of discrete samples. Thus, it is reasonable to speculate whether our theory of population coding
relates to these at the neural level.
When each neuron's tuning curve is sharply peaked, the resulting population code resembles importance sampling, a common Monte Carlo method for approximating probability densities, wherein the approximation consists of a weighted set of samples:
$$p(s) \approx \sum_{n=1}^{N} w^{(n)}\, \delta(s - s^{(n)}), \qquad (12)$$
where $s^{(n)}$ is drawn from a proposal density $\pi(s)$ and $w^{(n)} \propto p(s^{(n)})/\pi(s^{(n)})$. In fact, we can make this correspondence precise: for any population code of the form in Eq. 8, there exists an equivalent importance sampling approximation. The corresponding proposal density takes the form:
$$\pi(s) \propto \sum_{n} \frac{p(s_n)}{e^{x_n}}\, \delta(s - s_n). \qquad (13)$$
This means that optimizing the bound with respect to x is equivalent to selecting a proposal density
so as to maximize utility under resource constraints. A related analysis was made by Vul et al. [26],
though in a more restricted setting, showing that maximal utility is achieved with very few samples
when sampling is costly. Similarly, $\pi(s)$ will be sensitive to the computational costs inherent in the
utility lower bound, favoring a small number of samples.
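The equivalence can be verified directly: the importance weights $w^{(n)} \propto p(s_n)/\pi(s_n)$ under the proposal of Eq. 13 recover exactly the softmax weights $e^{x_n}/Z$ of Eq. 8 (a small check with made-up numbers):

    import numpy as np

    x = np.array([0.5, 2.0, 1.0])     # firing rates of three neurons
    p_at = np.array([0.2, 0.5, 0.3])  # target density at the preferred stimuli

    pi = p_at / np.exp(x)             # proposal mass at each atom (Eq. 13)
    pi /= pi.sum()
    w = p_at / pi                     # importance weights w ~ p / pi
    w /= w.sum()

    q = np.exp(x) / np.exp(x).sum()   # population-code weights (Eq. 8)
    assert np.allclose(w, q)          # the two representations coincide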
Interestingly, importance sampling has been proposed as a neurally plausible mechanism for Bayesian inference [22]. In that treatment, the proposal density was assumed to be the prior, leading to the prediction that neurons with preferred stimulus $s^*$ should occur with frequency proportional to the prior probability of $s^*$. One source of evidence for this prediction comes from the oblique effect: the observation that more V1 neurons are tuned to cardinal orientations than to oblique orientations [3], consistent with the statistics of the natural visual environment. In contrast, our model predicts that the proposal density will be sensitive to rewards in addition to the prior; as we argue in Section 5.1, a considerable amount of evidence favors this view.
5 Results
In the following sections, we examine some of the neurophysiological and psychological implications of the variational objective. Tying these diverse topics together is the central idea that utilities,
costs and probabilistic beliefs exert a synergistic effect on neural codes and their behavioral outputs.
One consequence of the variational objective is that a clear separation of these components in the
brain may not exist: rewards and costs infiltrate very early sensory areas. These influences result in
distortions of probabilistic belief that appear robustly in experiments with humans and animals.
5.1 Why are sensory receptive fields reward-modulated?
[Figure 2: Grasshopper auditory coding. Probability density of natural sounds and the optimized approximate density, with black lines demarcating the region of behaviorally relevant sounds. (Plot; x-axis: sound, y-axis: probability; legend: natural sounds, neural code.)]
Accumulating evidence indicates that perceptual representations in the brain are modulated by reward expectation. For example, Shuler and Bear [23] paired retinal stimulation of the left and right eyes with reward after different delays and recorded neurons in primary visual cortex that switched from representing purely physical attributes of the stimulation (e.g., eye of origin) to coding reward
timing. Similarly, Serences [20] showed that spatially selective regions of visual cortex are biased
by the prior reward associated with different spatial locations. These studies raise the possibility that
the brain does not encode probabilistic beliefs separately from reward; indeed, this idea has been
enshrined by a recent theoretical account [4]. One important ramification of this conflation is that it
would appear to violate one of the axioms of statistical decision theory: probabilistic sophistication
[18]. On the other hand, the variational framework we have described accounts for these findings by
showing that decision-making using approximate densities leads automatically to reward-modulated
probabilistic beliefs. Thus, the apparent inconsistency with statistical decision theory may be an
artifact of rational responses to the information-processing constraints of the brain.
To drive this point home, we now analyze one example in more detail. Machens et al. [12] recorded the responses of grasshopper auditory neurons to different stimulus ensembles and found that the ensembles that elicited the optimal response differed systematically from the natural auditory statistics of the grasshopper's environment. In particular, the optimal ensembles were restricted to a region of stimulus space in which behaviorally important sounds live, namely species-specific mating signals. In the words of Machens et al., "an organism may seek to distribute its sensory resources according to the behavioral relevance of the natural stimuli, rather than according to purely statistical principles." We modeled this phenomenon by constructing a relatively wide density of natural sounds with a narrow region of behaviorally relevant sounds (in which states are twice as rewarding). Figure 2 shows the results, confirming that maximizing the utility lower bound selects a kernel density estimate that is narrower than the target density of natural sounds.
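A sketch of this simulation (all constants are our illustrative choices, and optimize_code is the Eq. 11 sketch given earlier): the target is a broad density of natural sounds, rewards are doubled on a narrow band, and the bound is optimized.

    import numpy as np

    s = np.linspace(0.0, 1.0, 80)                    # preferred stimuli
    natural = np.exp(-0.5 * (s - 0.5) ** 2 / 0.04)   # broad natural-sound density
    relevant = (s > 0.45) & (s < 0.55)               # behaviorally relevant band
    reward = np.where(relevant, 2.0, 1.0)            # mating signals: double reward

    x = optimize_code(np.log(reward), np.log(natural))
    q = np.exp(x) / np.exp(x).sum()
    band_q = q[relevant].sum()                        # code mass in the band
    band_p = natural[relevant].sum() / natural.sum()  # target mass in the band
    # typically band_q > band_p: the code narrows onto the rewarded sounds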
5.2 Changing the cost of a spike
Experimentally, there are at least two ways to manipulate the cost of a spike. One is by changing
the amount of inhibition in the network (e.g., using injections of muscimol, a GABA agonist) and
hence increasing the metabolic requirements for action potential generation. A second method is
by manipulating the availability of glucose [7], either by making the subject hypoglycemic or by
administering local infusions of glucose directly into the brain. We predict that increasing spiking costs (either by reducing glucose levels or increasing GABAergic transmission) will result in a
diminished ability to detect weak signals embedded in noise. Consistent with this prediction, controlled hypoglycemia reduces the speed with which visual changes are detected amidst distractors
[13].
These predictions have received a more direct test in a recent visual search experiment by McPeek
and Keller [14], in which muscimol was injected into local regions of the superior colliculus, a
brain area known to control saccadic target selection. In the absence of distractors, response latencies to the target were increased when it appeared in the receptive fields of the inhibited neurons.
In the presence of distractors, response latencies increased and choice accuracy decreased when
the target appeared in the receptive fields of the inhibited neurons. We simulated these findings
by constructing a cost-field $\lambda(n)$ to represent the amount of GABAergic transmission at different
neurons induced by muscimol injections.
[Figure 3: Spiking cost in the superior colliculus. Top panels illustrate the distractor condition; bottom panels illustrate the no-distractor condition. (Left column) Target density, with the larger bump in the top panel representing the target. (Center column) Neural code under different settings of the cost-field $\lambda(n)$. (Right column) Firing rates under the control and muscimol cost-fields. (Plots; axes: $s$, neuron number, firing rate; curves: $q(s)$, $p(s)$; legend: control, muscimol.)]
In the distractor condition (Figure 3, top panel), accuracy
decreases because the increased cost of spiking in the neurons representing the target location dampens the probability density in that location. Increasing spiking cost also reduces the overall firing
rate in the target-representing neurons relative to the distractor-representing neurons. This predicts
increased response latencies if we assume a monotonic relationship with the relative firing rate in
the target-representing neurons. Similarly, in the no-distractor condition (Figure 3, bottom panel),
response latencies increase due to decreased firing rate in the target-representing neurons.
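The logic of this simulation can be sketched by making the spike cost spatially varying (a hypothetical variant of Eq. 10 in which λ becomes λ(n); the cost values and densities below are ours):

    import numpy as np

    def optimize_code_costfield(log_U, log_p, kappa=1.0, lam_n=None, iters=200):
        """Same fixed-point update as Eq. 11, but with a per-neuron spike cost
        lam_n (the 'cost-field'), so that C(q) = kappa*N + sum_n lam_n x_n."""
        g = log_U + log_p
        N = len(g)
        lam_n = np.full(N, 0.1) if lam_n is None else lam_n
        x = np.ones(N)
        for _ in range(iters):
            for n in range(N):
                w = np.exp(x)
                Z = w.sum()
                C = kappa * N + lam_n @ x
                mean_gap = (w @ (x - g)) / Z
                x[n] = max(0.0, g[n] + mean_gap - Z * lam_n[n] / (w[n] * C))
        return x

    s = np.linspace(-50.0, 50.0, 100)
    target = np.exp(-(s - 20) ** 2 / 50) + 0.6 * np.exp(-(s + 20) ** 2 / 50)
    lam_n = np.where(np.abs(s - 20) < 10, 0.5, 0.1)  # inflated cost near target
    x = optimize_code_costfield(np.zeros_like(s), np.log(target), lam_n=lam_n)
    # firing at the target location is dampened relative to a uniform-cost control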
5.3 Non-linear probability weighting
In this section, we show that the variational objective provides a new perspective on some well-known peculiarities of human probabilistic judgment. In particular, the ostensibly irrational nonlinear weighting of probabilities in risky choice emerges naturally from optimization of the variational objective under a natural assumption about the ecological distribution of rewards.
Tversky and Kahneman [25] observed that people tend to be risk-seeking (over-weighting probabilities) for low-probability gains and risk-averse (under-weighting probabilities) for high-probability gains. This pattern reverses for losses. The variational objective explains these phenomena by virtue of the fact that under neural resource constraints, the approximate density will be biased towards high-reward regions of the state space. It is also necessary to assume that the magnitude of gains or losses scales inversely with probability (i.e., large gains or losses are rare). With this assumption, the optimized neural code produces the four-fold pattern of risk attitudes observed by Tversky and Kahneman (Figure 4).
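A sketch of how this pattern arises (our illustrative construction, reusing the optimize_code sketch from Section 3.1): a two-state world in which the outcome's magnitude scales as 1/p, and the encoded probability of the outcome is read off from the optimized code.

    import numpy as np

    def encoded_probability(p_outcome, gain=True):
        """Two-state world: 'outcome' has probability p and magnitude ~ 1/p.
        Returns the approximate probability the code assigns to the outcome."""
        p = np.array([p_outcome, 1.0 - p_outcome])
        magnitude = 1.0 / p_outcome             # rare outcomes are large
        R = np.array([magnitude, 1.0]) if gain else np.array([1.0 / magnitude, 1.0])
        x = optimize_code(np.log(R), np.log(p))
        q = np.exp(x) / np.exp(x).sum()
        return q[0]

    lo = encoded_probability(0.05)  # typically > 0.05: over-weighting
    hi = encoded_probability(0.90)  # typically < 0.90: under-weighting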
6 Discussion
We have presented a variational objective function for neural codes that balances motivational, statistical and metabolic demands in the service of optimal behavior. The essential idea is that the
intractable problem of computing expected utilities can be finessed by instead computing expected
utilities under an approximate density that optimizes a variational lower bound on log expected
utility. This lower bound captures the neural costs of optimal control: more accurate approximations will require more metabolic resources, whereas less accurate approximations will diminish the
amount of earned reward.
[Figure 4: Probability weighting. Simulated calibration curve for gains and losses. Perfect calibration (i.e., linear weighting) is indicated by the dashed line. (Plot; x-axis: target probability, y-axis: approximate probability; curves: gains, losses.)]
This principle can explain, among other things, why receptive fields of
sensory neurons have repeatedly been found to be sensitive to reward contingencies. Intuitively,
expending more resources on accurately approximating the complete density of natural sensory
statistics is inefficient (from an optimal control perspective) if the behaviorally relevant signals live
in a compact subspace. We showed that the approximation that maximizes the utility lower bound
concentrates its density within this subspace.
Our variational framework differs in important ways from the one recently proposed by Friston
[4]. In his treatment, utilities are not represented explicitly at all; rather, they are implicit in the
probabilistic structure of the environment. Based on an evolutionary argument, Friston suggests
that high utility states are precisely those that have high probability, since otherwise organisms who
find themselves frequently in low utility states are unlikely to survive. Thus, adopting a control
policy that minimizes a variational upper bound on surprise will lead to optimal behavior. However,
adopting this control policy may lead to pathological behaviors, such as attraction to malign states
that have been experienced frequently (e.g., a person who has been poor her whole life should reject
a winning lottery ticket). In contrast, our variational framework is motivated by quite different
considerations arising from the computational constraints of the brain's architecture. Nonetheless,
these approaches have in common the idea that probabilistic beliefs will be shaped by the utility
structure of the environment.
The psychological concept of "bounded rationality" is an old one [24], classically associated with the observation that humans sometimes adopt strategies for identifying adequate solutions rather than optimal ones ("satisficing"). The variational framework offers a rather different perspective on
bounded rationality; it asserts that humans are indeed trying to find optimal solutions, but subject
to certain computational resource constraints. By making explicit what these constraints are, and
how they interact at a neural level, our work provides a foundation upon which to develop a more
complete neurobiological theory of optimal control under resource constraints.
Acknowledgments
We thank Matt Botvinick, Matt Hoffman, Chong Wang, Nathaniel Daw and Yael Niv for helpful discussions. SJG was supported by a Quantitative Computational Neuroscience grant from the National
Institutes of Health.
References
[1] C.H. Anderson and D.C. Van Essen. Neurobiological computational systems. Computational Intelligence Imitating Life, pages 213–222, 1994.
[2] J.S. Anderson, I. Lampl, D.C. Gillespie, and D. Ferster. The contribution of noise to contrast invariance of orientation tuning in cat visual cortex. Science, 290(5498):1968, 2000.
[3] R.L. De Valois, E. William Yund, and N. Hepler. The orientation and direction selectivity of cells in macaque visual cortex. Vision Research, 22(5):531–544, 1982.
[4] K. Friston. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2):127–138, 2010.
[5] T. Furmston and D. Barber. Variational methods for reinforcement learning. Proceedings of the Thirteenth Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[6] S.J. Gershman, E. Vul, and J.B. Tenenbaum. Perceptual multistability as Markov Chain Monte Carlo inference. In Y. Bengio, D. Schuurmans, J. Lafferty, C.K.I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 611–619. 2009.
[7] P.E. Gold. Role of glucose in regulating the brain and cognition. American Journal of Clinical Nutrition, 61:987S–995S, 1995.
[8] E.T. Jaynes. On the rationale of maximum-entropy methods. Proceedings of the IEEE, 70(9):939–952, 1982.
[9] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, and L.K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[10] W.B. Levy and R.A. Baxter. Energy efficient neural codes. Neural Computation, 8(3):531–543, 1996.
[11] W.J. Ma, J.M. Beck, P.E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9(11):1432–1438, 2006.
[12] C.K. Machens, T. Gollisch, O. Kolesnikova, and A.V.M. Herz. Testing the efficiency of sensory coding with optimal stimulus ensembles. Neuron, 47(3):447–456, 2005.
[13] R.J. McCrimmon, I.J. Deary, B.J.P. Huntly, K.J. MacLeod, and B.M. Frier. Visual information processing during controlled hypoglycaemia in humans. Brain, 119(4):1277, 1996.
[14] R.M. McPeek and E.L. Keller. Deficits in saccade target selection after inactivation of superior colliculus. Nature Neuroscience, 7(7):757–763, 2004.
[15] P.R. Montague and B. King-Casas. Efficient statistics, common currencies and the problem of reward-harvesting. Trends in Cognitive Sciences, 11(12):514–519, 2007.
[16] R.P.N. Rao. Bayesian computation in recurrent neural circuits. Neural Computation, 16(1):1–38, 2004.
[17] M. Sahani. A biologically plausible algorithm for reinforcement-shaped representational learning. Advances in Neural Information Processing, 16, 2004.
[18] L.J. Savage. The Foundations of Statistics. Dover, 1972.
[19] G. Sclar and R.D. Freeman. Orientation selectivity in the cat's striate cortex is invariant with stimulus contrast. Experimental Brain Research, 46(3):457–461, 1982.
[20] J.T. Serences. Value-based modulations in human visual cortex. Neuron, 60(6):1169–1181, 2008.
[21] L. Shi, N.H. Feldman, and T.L. Griffiths. Performing Bayesian inference with exemplar models. In Proceedings of the 30th Annual Conference of the Cognitive Science Society, pages 745–750, 2008.
[22] L. Shi and T. Griffiths. Neural implementation of hierarchical Bayesian inference by importance sampling. In Y. Bengio, D. Schuurmans, J. Lafferty, C.K.I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1669–1677. 2009.
[23] M.G. Shuler and M.F. Bear. Reward timing in the primary visual cortex. Science, 311(5767):1606, 2006.
[24] H.A. Simon. Models of Bounded Rationality. MIT Press, 1982.
[25] A. Tversky and D. Kahneman. Advances in prospect theory: cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4):297–323, 1992.
[26] E. Vul, N.D. Goodman, T.L. Griffiths, and J.B. Tenenbaum. One and done? Optimal decisions from very few samples. In Proceedings of the 31st Annual Meeting of the Cognitive Science Society, Amsterdam, the Netherlands, 2009.
[27] R.C. Wilson and L.H. Finkel. A neural implementation of the Kalman filter. In Y. Bengio, D. Schuurmans, J. Lafferty, C.K.I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 2062–2070. 2009.
[28] R.S. Zemel, P. Dayan, and A. Pouget. Probabilistic interpretation of population codes. Neural Computation, 10(2):403–430, 1998.
3,498 | 4,168 | Universal Kernels on Non-Standard Input Spaces
Andreas Christmann
University of Bayreuth
Department of Mathematics
D-95440 Bayreuth
[email protected]
Ingo Steinwart
University of Stuttgart
Department of Mathematics
D-70569 Stuttgart
[email protected]
Abstract
During the last years support vector machines (SVMs) have been successfully applied in situations where the input space X is not necessarily a subset of Rd . Examples include SVMs for the analysis of histograms or colored images, SVMs for
text classification and web mining, and SVMs for applications from computational
biology using, e.g., kernels for trees and graphs. Moreover, SVMs are known to be
consistent to the Bayes risk, if either the input space is a complete separable metric
space and the reproducing kernel Hilbert space (RKHS) H ? Lp (PX ) is dense,
or if the SVM uses a universal kernel k. So far, however, there are no kernels of
practical interest known that satisfy these assumptions, if X 6? Rd . We close this
gap by providing a general technique based on Taylor-type kernels to explicitly
construct universal kernels on compact metric spaces which are not subsets of $\mathbb{R}^d$.
We apply this technique for the following special cases: universal kernels on the
set of probability measures, universal kernels based on Fourier transforms, and
universal kernels for signal processing.
1 Introduction
For more than a decade, kernel methods such as support vector machines (SVMs) have belonged
to the most successful learning methods. Besides several other nice features, one key argument
for using SVMs has been the so-called 'kernel trick' [22], which decouples the SVM optimization
problem from the domain of the samples, thus making it possible to use SVMs on virtually any input
space X. This flexibility is in strong contrast to more classical learning methods from both machine
learning and non-parametric statistics, which almost always require input spaces X ? Rd . As a
result, kernel methods have been successfully used in various application areas that were previously
infeasible for machine learning methods. The following, by no means exhaustive, list illustrates this:
- SVMs processing probability measures, e.g. histograms, as input samples have been used to analyze histogram data such as colored images, see [5, 11, 14, 12, 27, 29], and also [17] for nonextensive information theoretic kernels on measures.
- SVMs for text classification and web mining [15, 12, 16],
- SVMs with kernels from computational biology, e.g. kernels for trees and graphs [23].
In addition, several extensions or generalizations of kernel-methods have been considered, see
e.g. [13, 26, 9, 16, 7, 8, 4]. Besides their practical success, SVMs nowadays also possess a rich
statistical theory, which provides various learning guarantees, see [31] for a recent account. Interestingly, in this analysis, the kernel and its reproducing kernel Hilbert space (RKHS) make it possible to completely decouple the statistical analysis of SVMs from the input space X. For example, if one uses the hinge loss and a bounded measurable kernel whose RKHS $H$ is separable and dense in $L_1(\mu)$ for all distributions $\mu$ on $X$, then [31, Theorem 7.22] together with [31, Theorem 2.31] and the discussion on [31, page 267ff] shows that the corresponding SVM is universally classification consistent even without an entropy number assumption if one picks a sequence $(\lambda_n)$ of positive regularization parameters that satisfy $\lambda_n \to 0$ and $n\lambda_n / \ln n \to \infty$. In other words, independently of the input space $X$, the universal consistency of SVMs is well-understood modulo an approximation theoretical question, namely that of the denseness of $H$ in all $L_1(\mu)$.
For standard input spaces $X \subseteq \mathbb{R}^d$ and various classical kernels, this question of denseness has been positively answered. For example, for compact $X \subseteq \mathbb{R}^d$, [30] showed that, among a few others, the RKHSs of the Gaussian RBF kernels are universal, that is, they are dense in the space $C(X)$ of continuous functions $f : X \to \mathbb{R}$. With the help of a standard result from measure theory, see e.g. [1, Theorem 29.14], it is then easy to conclude that these RKHSs are also dense in all $L_1(\mu)$ for which $\mu$ has a compact support. This key result has been extended in a couple of different directions: For example, [18] establishes universality for more classes of kernels on compact $X \subseteq \mathbb{R}^d$, whereas [32] shows the denseness of the Gaussian RKHSs in $L_1(\mu)$ for all distributions $\mu$ on $\mathbb{R}^d$. Finally, [7, 8, 28, 29] show that universal kernels are closely related to so-called characteristic kernels that
can be used to distinguish distributions. In addition, all these papers contain sufficient or necessary
conditions for universality of kernels on arbitrary compact metric spaces X, and [32] further shows
that the compact metric spaces are exactly the compact topological spaces on which there exist
universal kernels.
Unfortunately, however, it appears that neither the sufficient conditions for universality nor the proof
of the existence of universal kernels can be used to construct universal kernels on compact metric
spaces $X \not\subseteq \mathbb{R}^d$. In fact, to the best of our knowledge, no explicit example of such kernels has so far
been presented. As a consequence, it seems fair to say that, beyond the $X \subseteq \mathbb{R}^d$ case, the theory of
SVMs is incomplete, which is in contrast to the obvious practical success of SVMs for such input
spaces X as illustrated above.
The goal of this paper is to close this gap by providing the first explicit and constructive examples
of universal kernels that live on compact metric spaces $X \not\subseteq \mathbb{R}^d$. To achieve this, our first step is to extend the definition of the Gaussian RBF kernels, or more generally, kernels that can be expressed by a Taylor series, from the Euclidean $\mathbb{R}^d$ to its infinite dimensional counterpart, that is, the space $\ell_2$ of square summable sequences. Unfortunately, on the space $\ell_2$ we face new challenges due to its infinite dimensional nature. Indeed, the closed balls of $\ell_2$ are no longer (norm-)compact subsets of $\ell_2$ and hence we cannot expect universality on these balls. To address this issue, one may be tempted to use the weak*-topology on $\ell_2$, since in this topology the closed balls are both compact and metrizable, thus universal kernels do exist on them. However, the Taylor kernels do not belong to them, because, basically, the inner product $\langle \cdot, \cdot \rangle_{\ell_2}$ fails to be continuous with respect to the weak*-topology, as the sequence of the standard orthonormal basis vectors shows. To address this compactness issue we consider (norm-)compact subsets of $\ell_2$ only. Since the inner product of $\ell_2$ is continuous with respect to the norm by virtue of the Cauchy-Schwarz inequality, it turns out that the Taylor kernels are continuous with respect to the norm topology. Moreover, we will see that in this situation the Stone-Weierstraß argument of [30] yields a variety of universal kernels including the infinite dimensional extensions of the Gaussian RBF kernels.
However, unlike the finite dimensional Euclidean spaces $\mathbb{R}^d$ and their compact subsets, the compact subsets of $\ell_2$ can hardly be viewed as somewhat natural examples of input spaces X. Therefore, we go one step further by considering compact metric spaces X for which there exist a separable Hilbert space H and an injective and continuous map $\rho : X \to H$. If, in this case, we fix an analytic function $K : \mathbb{R} \to \mathbb{R}$ that can be globally expressed by its Taylor series developed at zero and that has strictly positive Taylor coefficients, then $k(x, x') := K(\langle \rho(x), \rho(x') \rangle_H)$ defines a universal kernel on X and the same is true for the analogous definition of Gaussian kernels. Although this situation may look at a first glance even more artificial than the $\ell_2$-case, it turns out that quite a few interesting explicit examples can be derived from this situation. Indeed, we will use this general result to present examples of Gaussian kernels defined on the set of distributions over some input space $\Omega$ and on certain sets of functions.
The paper has the following structure. Section 2 contains the main results and constructs examples
for universal kernels based on our technique. In particular, we show how to construct universal
kernels on sets of probability measures and on sets of functions, the latter being interesting for
signal processing. Section 3 contains a short discussion and Section 4 gives the proofs of the main
results.
2 Main result
A kernel $k$ on a set $X$ is a function $k : X \times X \to \mathbb{R}$ for which all matrices of the form $(k(x_i, x_j))_{i,j=1}^n$, $n \in \mathbb{N}$, $x_1, \dots, x_n \in X$, are symmetric and positive semi-definite. Equivalently, $k$ is a kernel if and only if there exists a Hilbert space $\tilde{H}$ and a map $\tilde{\Phi} : X \to \tilde{H}$ such that $k(x, x') = \langle \tilde{\Phi}(x), \tilde{\Phi}(x') \rangle_{\tilde{H}}$ for all $x, x' \in X$. While neither $\tilde{H}$ nor $\tilde{\Phi}$ are uniquely determined, the so-called reproducing kernel Hilbert space (RKHS) of $k$, which is given by
$$H := \bigl\{ \langle v, \tilde{\Phi}(\cdot) \rangle_{\tilde{H}} : v \in \tilde{H} \bigr\}$$
and $\|f\|_H := \inf\{\|v\|_{\tilde{H}} : f = \langle v, \tilde{\Phi}(\cdot) \rangle_{\tilde{H}}\}$, is uniquely determined, see e.g. [31, Chapter 4.2]. For more information on kernels, we refer to [31, Chapter 4]. Moreover, for a compact metric space $(X, d)$, we write $C(X) := \{f : X \to \mathbb{R} \mid f \text{ continuous}\}$ for the space of continuous functions on $X$ and equip this space with the usual supremum norm $\|\cdot\|_\infty$. A kernel $k$ on $X$ is called universal, if $k$ is continuous and its RKHS $H$ is dense in $C(X)$. As mentioned before, this notion, which goes back to [30], plays a key role in the analysis of kernel-based learning methods. Let $r \in (0, \infty]$. The kernels we consider in this paper are constructed by functions $K : [-r, r] \to \mathbb{R}$ that can be expressed by their Taylor series, that is
$$K(t) = \sum_{n=0}^{\infty} a_n t^n, \qquad t \in [-r, r]. \tag{1}$$
For such functions, [31, Lemma 4.8] showed that
$$k(x, x') := K\bigl(\langle x, x' \rangle_{\mathbb{R}^d}\bigr) = \sum_{n=0}^{\infty} a_n \langle x, x' \rangle_{\mathbb{R}^d}^n, \qquad x, x' \in \sqrt{r}\, B_{\mathbb{R}^d}, \tag{2}$$
defines a kernel on the closed ball $\sqrt{r}\, B_{\mathbb{R}^d} := \{x \in \mathbb{R}^d : \|x\|_2 \le \sqrt{r}\}$ with radius $\sqrt{r}$, whenever all Taylor coefficients $a_n$ are non-negative. Following [31], we call such kernels Taylor kernels. [30], see also [31, Lemma 4.57], showed that Taylor kernels are universal if $a_n > 0$ for all $n \ge 0$, while [21] notes that strict positivity on certain subsets of indices $n$ suffices.
Obviously, the definition (2) of $k$ is still possible if one replaces $\mathbb{R}^d$ by its infinite dimensional and separable counterpart $\ell_2 := \{(w_j)_{j \ge 1} : \|(w_j)\|_{\ell_2}^2 := \sum_{j \ge 1} w_j^2 < \infty\}$. Let us denote the closed unit ball in $\ell_2$ by $B_{\ell_2}$, or more generally, the closed unit ball of a Banach space $E$ by $B_E$, that is, $B_E := \{v \in E : \|v\|_E \le 1\}$. Our first main result shows that this extension leads to a kernel whose restrictions to compact subsets are universal, if $a_n > 0$ for all $n \in \mathbb{N}_0 := \mathbb{N} \cup \{0\}$.
Theorem 2.1 Let $K : [-r, r] \to \mathbb{R}$ be a function of the form (1). Then we have:
i) If $a_n \ge 0$ for all $n \ge 0$, then $k : \sqrt{r}\, B_{\ell_2} \times \sqrt{r}\, B_{\ell_2} \to \mathbb{R}$ is a kernel, where
$$k(w, w') := K\bigl(\langle w, w' \rangle_{\ell_2}\bigr) = \sum_{n=0}^{\infty} a_n \langle w, w' \rangle_{\ell_2}^n, \qquad w, w' \in \sqrt{r}\, B_{\ell_2}. \tag{3}$$
ii) If $a_n > 0$ for all $n \in \mathbb{N}_0$, then the restriction $k_{|W \times W} : W \times W \to \mathbb{R}$ of $k$ to an arbitrary compact set $W \subseteq \sqrt{r}\, B_{\ell_2}$ is universal.
To consider a first explicit example, let $K := \exp : \mathbb{R} \to \mathbb{R}$ be the exponential function. Then $K$ clearly satisfies the assumptions of Theorem 2.1 for all $r > 0$, and hence the resulting exponential kernel is universal on every compact subset $W$ of $\ell_2$. Moreover, for $\gamma \in (0, \infty)$, the related Gaussian-type RBF kernel $k_\gamma : \ell_2 \times \ell_2 \to \mathbb{R}$ defined by
$$k_\gamma(w, w') := \exp\bigl(-\gamma^2 \|w - w'\|_{\ell_2}^2\bigr) = \frac{\exp\bigl(2\gamma^2 \langle w, w' \rangle_{\ell_2}\bigr)}{\exp\bigl(\gamma^2 \|w\|_{\ell_2}^2\bigr)\, \exp\bigl(\gamma^2 \|w'\|_{\ell_2}^2\bigr)} \tag{4}$$
is also universal on every compact $W \subseteq \ell_2$, since modulo the scaling by $\gamma$ it is the normalized version of the exponential kernel, and thus it is universal by [31, Lemma 4.55].
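To make the construction tangible, the following sketch (our illustration, not part of the paper) evaluates the exponential Taylor kernel and the Gaussian-type RBF kernel (4) on finite truncations of square-summable sequences; the truncation length and decay rates are arbitrary assumptions, and the assertion checks the normalization identity behind (4).

```python
import numpy as np

def exponential_kernel(w, w2):
    """Taylor kernel K(<w, w'>) with K = exp, i.e. Taylor coefficients
    a_n = 1/n! > 0 for all n, evaluated on truncated l2 sequences."""
    return np.exp(np.dot(w, w2))

def gaussian_rbf_kernel(w, w2, gamma=1.0):
    """Gaussian-type RBF kernel (4), computed directly and as the
    normalized exponential kernel; the two expressions coincide."""
    direct = np.exp(-gamma**2 * np.sum((w - w2)**2))
    normalized = (np.exp(2 * gamma**2 * np.dot(w, w2))
                  / (np.exp(gamma**2 * np.dot(w, w))
                     * np.exp(gamma**2 * np.dot(w2, w2))))
    assert np.isclose(direct, normalized)
    return direct

# Two rapidly decaying sequences, truncated after 50 terms.
i = np.arange(1, 51)
w, w2 = 1.0 / i**2, 1.0 / (i + 1)**2
print(exponential_kernel(w, w2), gaussian_rbf_kernel(w, w2))
```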
Although we have achieved our first goal, namely explicit, constructive examples of universal kernels on $X \not\subseteq \mathbb{R}^d$, the result is so far not really satisfying. Indeed, unlike the finite dimensional Euclidean spaces $\mathbb{R}^d$, the infinite dimensional space $\ell_2$ rarely appears as the input space in real-world applications. The following second result can be used to address this issue.
Theorem 2.2 Let $X$ be a compact metric space and $H$ be a separable Hilbert space such that there exists a continuous and injective map $\rho : X \to H$. Furthermore, let $K : \mathbb{R} \to \mathbb{R}$ be a function of the form (1). Then the following statements hold:
i) If $a_n \ge 0$ for all $n \in \mathbb{N}_0$, then $k : X \times X \to \mathbb{R}$ defines a kernel, where
$$k(x, x') := K\bigl(\langle \rho(x), \rho(x') \rangle_H\bigr) = \sum_{n=0}^{\infty} a_n \langle \rho(x), \rho(x') \rangle_H^n, \qquad x, x' \in X. \tag{5}$$
ii) If $a_n > 0$ for all $n \in \mathbb{N}_0$, then $k$ is a universal kernel.
iii) For $\gamma > 0$, the Gaussian-type RBF-kernel $k_\gamma : X \times X \to \mathbb{R}$ is a universal kernel, where
$$k_\gamma(x, x') := \exp\bigl(-\gamma^2 \|\rho(x) - \rho(x')\|_H^2\bigr), \qquad x, x' \in X. \tag{6}$$
It seems possible that the latter result for the Gaussian-type RBF kernel can be extended to other positive non-constant radial basis function kernels such as $k_\gamma(x, x') := \exp\bigl(-\gamma^2 \|\rho(x) - \rho(x')\|_H\bigr)$ or the Student-type RBF kernels $k_\gamma(x, x') := \bigl(1 + \gamma^2 \|\rho(x) - \rho(x')\|_H^2\bigr)^{-\beta}$ for $\gamma^2 > 0$ and $\beta \ge 1$. Indeed, [25] uses the fact that on $\mathbb{R}^d$ such kernels have an integral representation in terms of the Gaussian RBF kernels to show, see [25, Corollary 4.9], that these kernels inherit approximation properties such as universality from the Gaussian RBF kernels. We expect that the same arguments can be made for $\ell_2$ and then, in a second step, for the situation of Theorem 2.2.
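Read as a recipe, Theorem 2.2 says: pick any continuous injective embedding $\rho$ into a separable Hilbert space and plug it into (6). A minimal sketch of that recipe follows (ours, not from the paper); the embedding passed in is a user-supplied assumption, and finite-dimensional arrays stand in for Hilbert space elements.

```python
import numpy as np

def gaussian_kernel_from_embedding(rho, gamma=1.0):
    """Build the Gaussian-type kernel (6) from an embedding rho that is
    assumed to be continuous and injective on the input space X."""
    def k(x, x2):
        d = rho(x) - rho(x2)
        return np.exp(-gamma**2 * np.dot(d, d))
    return k

def gram_matrix(k, xs):
    """Kernel matrix over a sample; symmetric positive semi-definite
    because k is a kernel."""
    return np.array([[k(a, b) for b in xs] for a in xs])

# Example: rho the identity on R^d recovers the usual Gaussian RBF.
k = gaussian_kernel_from_embedding(lambda x: np.asarray(x), gamma=0.5)
print(gram_matrix(k, [np.zeros(3), np.ones(3)]))
```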
Before we provide some examples of situations in which Theorem 2.2 can be used to define explicit universal kernels, we point to a technical detail of Theorem 2.2, which may be overlooked, thus leading to wrong conclusions.
To this end, let $(X, d_X)$ be an arbitrary metric space, $H$ be a separable Hilbert space and $\rho : X \to H$ be an injective map. We write $V := \rho(X)$ and equip this space with the metric defined by $H$. Thus, $\rho : X \to V$ is bijective by definition. Since $H$ is assumed to be separable, it is isometrically isomorphic to $\ell_2$, and hence there exists an isometric isomorphism $I : H \to \ell_2$. We write $W := I(V)$ and equip this set with the metric defined by the norm of $\ell_2$. For a function $f : W \to \mathbb{R}$, we can then consider the following diagram
$$\begin{array}{ccc} (X, d_X) & \xrightarrow{\ f \circ I \circ \rho\ } & (\mathbb{R}, |\cdot|) \\[2pt] \rho \big\downarrow & & \big\uparrow f \\[2pt] (V, \|\cdot\|_H) & \xrightarrow{\qquad I \qquad} & (W, \|\cdot\|_{\ell_2}) \end{array} \tag{7}$$
Since both $\rho$ and $I$ are bijective, it is easy to see that $f$ not only defines a function $g : X \to \mathbb{R}$ by $g := f \circ I \circ \rho$, but conversely, every function $g : X \to \mathbb{R}$ has such a representation and this representation is unique. In other words, there is a one-to-one relationship between the functions $X \to \mathbb{R}$ and the functions $W \to \mathbb{R}$. Let us now assume that we have a kernel $k_W$ on $W$ with RKHS $H_W$ and canonical feature map $\Phi_W : W \to H_W$. Then $k_X : X \times X \to \mathbb{R}$, given by
$$k_X(x, x') := k_W\bigl(I \circ \rho(x), I \circ \rho(x')\bigr), \qquad x, x' \in X,$$
defines a kernel on $X$, since
$$k_X(x, x') = k_W\bigl(I \circ \rho(x), I \circ \rho(x')\bigr) = \bigl\langle \Phi_W(I(\rho(x'))), \Phi_W(I(\rho(x))) \bigr\rangle_{H_W}, \qquad x, x' \in X,$$
shows that $\Phi_W \circ I \circ \rho : X \to H_W$ is a feature map of $k_X$. Moreover, [31, Theorem 4.21] shows that the RKHS $H_X$ of $k_X$ is given by
$$H_X = \bigl\{ \langle f, \Phi_W \circ I \circ \rho(\cdot) \rangle_{H_W} : f \in H_W \bigr\}.$$
Since, for $f \in H_W$, the reproducing property of $H_W$ gives $f \circ I \circ \rho(x) = \langle f, \Phi_W \circ I \circ \rho(x) \rangle_{H_W}$ for all $x \in X$, we thus conclude that $H_X = \{f \circ I \circ \rho : f \in H_W\} =: H_W \circ I \circ \rho$. Let us now assume that $X$ is compact and that $k_W$ is one of the universal kernels considered in Theorem 2.1 or the Gaussian RBF kernel (4). Then the proof of Theorem 2.2 shows that $k_X$ is one of the universal kernels considered in Theorem 2.2. Moreover, if we consider the kernel $k_V : V \times V \to \mathbb{R}$ defined by $k_V(v, v') := k_W(I(v), I(v'))$, then an analogous argument shows that $k_V$ is a universal kernel. This raises the question, whether we need the compactness of $X$, or whether it suffices to assume that $\rho$ is injective, continuous and has a compact image $V$. Surprisingly, the answer is that it depends on the type of universality one needs. Indeed, if $\rho$ is as in Theorem 2.2, then the compactness of $X$ ensures that $\rho$ is a homeomorphism, that is, $\rho^{-1} : V \to X$ is continuous, too. Since $I$ is clearly also a homeomorphism, we can easily conclude that $C(X) = C(W) \circ I \circ \rho$, that is, we have the same relationship as we have for the RKHSs $H_W$ and $H_X$. From this, the universality is easy to establish. Let us now assume the compactness of $V$ instead of the compactness of $X$. Then, in general, $\rho$ is not a homeomorphism and the sets of continuous functions on $X$ and $V$ are in general different, even if we consider the set of bounded continuous functions on $X$. To see the latter, consider e.g. the map $\rho : [0, 1) \to S^1$ onto the unit sphere $S^1$ of $\mathbb{R}^2$ defined by $\rho(t) := (\sin(2\pi t), \cos(2\pi t))$. Now this difference makes it impossible to conclude from the universality of $k_V$ (or $k_W$) to the universality of $k_X$. However, if $\tau_V$ denotes the topology of $V$, then $\rho^{-1}(\tau_V) := \{\rho^{-1}(O) : O \in \tau_V\}$ defines a new topology on $X$, which satisfies $\rho^{-1}(\tau_V) \subseteq \tau_X$. Consequently, there are, in general, fewer continuous functions with respect to $\rho^{-1}(\tau_V)$. Now, it is easy to check that $d_\rho(x, x') := \|\rho(x) - \rho(x')\|_H$ defines a metric that generates $\rho^{-1}(\tau_V)$ and, since $\rho$ is isometric with respect to this new metric, we can conclude that $(X, d_\rho)$ is a compact metric space. Consequently, we are back in the situation of Theorem 2.2, and hence $k_X$ is universal with respect to the space $C(X, d_\rho)$ of functions $X \to \mathbb{R}$ that are continuous with respect to $d_\rho$. In other words, while $H_X$ may fail to approximate every function that is continuous with respect to $d_X$, it does approximate every function that is continuous with respect to $d_\rho$. Whether the latter approximation property is enough clearly depends on the specific application at hand.
Let us now present some universal kernels of practical interest. Please note that although the function $\rho$ in our examples is even linear, Theorem 2.2 only assumes $\rho$ to be continuous and injective. We start with two examples where $X$ is the set of distributions on some space $\Omega$.
Let (?, d? ) be a compact metric space, B(?) be its Borel ?-algebra, and X := M1 (?) be the set of
all Borel probability measures on ?. Then the topology describing weak convergence of probability
measures can be metrized, e.g., by the Prohorov metric
dX (P, P0 ) := inf ? > 0 : P(A) ? P0 (A? ) + ? for all A ? B(?) ,
P, P0 ? X ,
(8)
where A? := {? 0 ? ? : d? (?, ? 0 ) < ? for some ? ? A}, see e.g. [2, Theorem 6.8, p. 73].
Moreover, (X, dX ) is a compact metric space if and only if (?, d? ) is a compact metric space, see
[19, Thm. 6.4]. In order to construct universal kernels on (X, dX ) with the help of Theorem 2.2,
it thus remains to find separable Hilbert spaces H and injective, continuous embeddings ? : X ?
H. Let k? be a continuous kernel on ? with RKHS H? and canonical feature map ?? (?) :=
k? (?, ?), ? ? ?. Note that k? is bounded because it is continuous and ? is compact. Then H?
is separable and ?? is bounded and continuous, see [31, Lemmata 4.23, 4.29, 4.33]. Assume that
k? is additionally characteristic, i.e. the function ? : X ? H? defined by the Bochner integral
?(P) := EP ?? is injective. Then the next lemma, which is taken from [10, Thm. 5.1] and which is
a modification of a theorem in [3, p. III. 40], ensures the continuity of ?.
Lemma 2.3 Let (?, d? ) be a complete separable metric space, H be a separable Banach space and
? : ? ? H be a bounded, continuous function. Then ? : M1 (?) ? H defined by ?(P) := EP ? is
continuous, i.e., EPn ? ? EP ?, whenever (Pn )n?N ? M1 (?) converges weakly in M1 (?) to P.
Consequently, the map ? : M1 (?) ? H? satisfies the assumptions of Theorem 2.2, and hence the
Gaussian-type RBF kernel
k? (P, P0 ) := exp ?? 2 kEP ?? ? EP0 ?? k2H? , P, P0 ? M1 (?),
(9)
5
is universal and obviously bounded. Note that this kernel is conceptually different from characteristic kernels on $\Omega$. Indeed, characteristic kernels live on $\Omega$ and their RKHSs consist of functions $\Omega \to \mathbb{R}$, while the new kernel $k_\gamma$ lives on $M_1(\Omega)$ and its RKHS consists of functions $M_1(\Omega) \to \mathbb{R}$. Consequently, $k_\gamma$ can be used to learn from samples that are individual distributions, e.g. represented by histograms, densities or data, while characteristic kernels can only be used to check whether two of such distributions are equal or not.
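For empirical distributions, the RKHS distance in (9) can be expanded with the reproducing property into pairwise base-kernel evaluations, so (9) is computable without ever forming $\mathbb{E}_{\mathrm{P}} \Phi_\Omega$ explicitly. The sketch below is our illustration; the Gaussian base kernel on $\mathbb{R}^d$ (which is characteristic) is an assumption, and the two samples may have different sizes.

```python
import numpy as np

def k_omega(z, y, sigma=1.0):
    """A characteristic base kernel on Omega (here: Gaussian RBF on
    R^d, which is characteristic); an assumption for this demo."""
    return np.exp(-np.sum((z - y)**2) / (2 * sigma**2))

def sq_embedding_dist(Z, Y):
    """||E_P Phi - E_P' Phi||^2 for empirical measures supported on the
    rows of Z and Y, expanded via the reproducing property."""
    kzz = np.mean([[k_omega(a, b) for b in Z] for a in Z])
    kyy = np.mean([[k_omega(a, b) for b in Y] for a in Y])
    kzy = np.mean([[k_omega(a, b) for b in Y] for a in Z])
    return kzz + kyy - 2 * kzy

def k_gamma(Z, Y, gamma=1.0):
    """The universal Gaussian-type kernel (9) between two empirical
    distributions; note Z and Y may have different lengths."""
    return np.exp(-gamma**2 * sq_embedding_dist(Z, Y))

Z = np.random.randn(80, 2)          # sample from P
Y = np.random.randn(120, 2) + 0.5   # sample from P'
print(k_gamma(Z, Y))
```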
Example 2: universal kernels based on Fourier transforms of probability measures.
Consider the set $X := M_1(\Omega)$, where $\Omega \subseteq \mathbb{R}^d$ is compact. Moreover, let $\rho$ be the Fourier transform (or characteristic function), that is, $\rho(\mathrm{P}) := \hat{\mathrm{P}}$, where $\hat{\mathrm{P}}(t) := \int e^{i\langle z, t \rangle}\, d\mathrm{P}(z) \in \mathbb{C}$, $t \in \mathbb{R}^d$. It is well-known, see e.g. [6, Chap. 9], that, for all $\mathrm{P} \in M_1(\Omega)$, $\hat{\mathrm{P}}$ is uniformly continuous on $\mathbb{R}^d$ and $\|\hat{\mathrm{P}}\|_\infty \le 1$. Moreover, $\rho : \mathrm{P} \mapsto \hat{\mathrm{P}}$ is injective, and if a sequence $(\mathrm{P}_n)$ converges weakly to some $\mathrm{P}$, then $(\hat{\mathrm{P}}_n)$ converges uniformly to $\hat{\mathrm{P}}$ on every compact subset of $\mathbb{R}^d$. Now let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ with $\mathrm{support}(\mu) = \mathbb{R}^d$; e.g., $\mu$ can be any probability distribution on $\mathbb{R}^d$ with Lebesgue density $h > 0$. Then the previous properties of the Fourier transform can be used to show that $\rho : M_1(\Omega) \to L_2(\mu)$ is continuous, and hence Theorem 2.2 ensures that the following Gaussian-type kernel is universal and bounded:
$$k_\gamma(\mathrm{P}, \mathrm{P}') := \exp\bigl(-\gamma^2 \|\hat{\mathrm{P}} - \hat{\mathrm{P}}'\|_{L_2(\mu)}^2\bigr), \qquad \mathrm{P}, \mathrm{P}' \in M_1(\Omega). \tag{10}$$
In view of the previous two examples, we mention that the probability measures $\mathrm{P}$ and $\mathrm{P}'$ are often not directly observable in practice, but only corresponding empirical distributions can be obtained. In this case, a simple standard technique is to construct histograms to represent these empirical distributions as vectors in a finite-dimensional Euclidean space, although it is well-known that histograms can yield bad estimates for probability measures. Our new kernels make it possible to directly plug the empirical distributions into the kernel $k_\gamma$, even if these distributions do not have the same length. Moreover, other techniques to convert empirical distributions to absolutely continuous distributions such as kernel estimators derived via weighted averaging of rounded points (WARPing) and (averaging) histograms with different origins [20, 24] can be used in $k_\gamma$, too. Clearly, the preferred method will most likely depend on the specific application at hand, and one benefit of our construction is that it allows this flexibility.
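The kernel (10) is likewise computable from data: the empirical characteristic function is a finite sum, and the $L_2(\mu)$ norm can be approximated by Monte Carlo integration over frequencies drawn from $\mu$. In the sketch below (ours) we take $\mu$ to be the standard Gaussian on $\mathbb{R}^d$ (full support), which is our assumption, not a choice made by the paper.

```python
import numpy as np

def char_fn(Z, T):
    """Empirical characteristic function hat-P(t) = mean_j e^{i<z_j, t>}
    evaluated at the rows t of T; returns a complex vector."""
    return np.mean(np.exp(1j * Z @ T.T), axis=0)

def k_gamma_fourier(Z, Y, gamma=1.0, n_freq=2000, seed=None):
    """Monte Carlo sketch of kernel (10): the L2(mu) norm is
    approximated by averaging over frequencies t_s drawn from mu."""
    rng = np.random.default_rng(seed)
    T = rng.standard_normal((n_freq, Z.shape[1]))   # t_s ~ mu
    diff = char_fn(Z, T) - char_fn(Y, T)
    sq_norm = np.mean(np.abs(diff)**2)              # approx ||P^ - P'^||^2
    return np.exp(-gamma**2 * sq_norm)

Z = np.random.randn(100, 2)
Y = np.random.randn(100, 2) * 1.3
print(k_gamma_fourier(Z, Y))
```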
Example 3: universal kernels for signal processing.
Let $(\Omega, \mathcal{A}, \mu)$ be an arbitrary measure space and $L_2(\mu)$ be the usual space of square $\mu$-integrable functions on $\Omega$. Let us additionally assume that $L_2(\mu)$ is separable, which is typically, but not always, satisfied. In addition, let us assume that our input values $x_i \in X$ are functions taken from some compact set $X \subseteq L_2(\mu)$. A typical example, where this situation occurs, is signal processing, where the true signal $f \in L_2([0, 1])$, which is a function of time, cannot be directly observed, but a smoothed version $g := Tf$ of the signal is observable. This smoothing can often be described by a compact linear operator $T : L_2([0, 1]) \to L_2([0, 1])$, e.g., a convolution operator, acting on the true signals. Hence, if we assume that the true signals are contained in the closed unit ball $B_{L_2([0,1])}$, then the observed, smoothed signals $Tf$ are contained in a compact subset $X$ of $L_2([0, 1])$. Let us now return to the general case introduced above. Then the identity map $\rho := \mathrm{id} : X \to L_2(\mu)$ satisfies the assumptions of Theorem 2.2, and hence the Gaussian-type kernel
$$k_\gamma(g, g') := \exp\bigl(-\gamma^2 \|g - g'\|_{L_2(\mu)}^2\bigr), \qquad g, g' \in X, \tag{11}$$
defines a universal and bounded kernel on $X$. As in the previous examples, note that the computation of $k_\gamma$ does not require the functions $g$ and $g'$ to be in a specific format such as a certain discretization.
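Numerically, one still has to approximate the $L_2$ norm in (11) somehow, e.g. by quadrature on a grid, but the grid is a property of the computation, not of the kernel. A minimal sketch follows (ours; the grid size and example signals are arbitrary assumptions).

```python
import numpy as np

def k_gamma_signal(g, g2, gamma=1.0, n_grid=1000):
    """Sketch of kernel (11) for signals on [0,1]: g, g2 are callables
    and the L2 norm is approximated on a uniform grid (the grid is a
    numerical device only; (11) itself is discretization-free)."""
    t = np.linspace(0.0, 1.0, n_grid)
    sq = np.trapz((g(t) - g2(t))**2, t)   # approx ||g - g'||^2_{L2}
    return np.exp(-gamma**2 * sq)

# Two smoothed signals, e.g. outputs of a convolution operator T.
g  = lambda t: np.sin(2 * np.pi * t)
g2 = lambda t: np.sin(2 * np.pi * t + 0.3)
print(k_gamma_signal(g, g2))
```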
3 Discussion
The main goal of this paper was to provide an explicit construction of universal kernels that are
defined on arbitrary compact metric spaces, which are not necessarily subsets of $\mathbb{R}^d$. There is a
still increasing interest in kernel methods including support vector machines on such input spaces,
e.g. for classification or regression purposes for input values being probability measures, histograms
or colored images. As examples, we gave explicit universal kernels on the set of probability distributions and for signal processing. One direction of further research may be to generalize our results
to the case of non-compact metric spaces or to find quantitative approximation results.
4 Proofs
In the following, we write $\mathbb{N}_0^{\mathbb{N}}$ for the set of all sequences $(j_i)_{i \ge 1}$ with values in $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$. Elements of this set will serve us as multi-indices with countably many components. For $j = (j_i) \in \mathbb{N}_0^{\mathbb{N}}$, we will therefore adopt the multi-index notation
$$|j| := \sum_{i \ge 1} j_i.$$
Note that $|j| < \infty$ implies that $j$ has only finitely many components $j_i$ with $j_i \neq 0$.
Lemma 4.1 Assume that $n \in \mathbb{N}$ is fixed and that for all $j \in \mathbb{N}_0^{\mathbb{N}}$ with $|j| = n$, we have some constant $c_j \in (0, \infty)$. Then for all $j \in \mathbb{N}_0^{\mathbb{N}}$ with $|j| = n + 1$, there exists a constant $\tilde{c}_j \in (0, \infty)$ such that for all summable sequences $(b_i) \subseteq [0, \infty)$ we have
$$\Biggl( \sum_{j \in \mathbb{N}_0^{\mathbb{N}} : |j| = n} c_j \prod_{i=1}^{\infty} b_i^{j_i} \Biggr) \cdot \sum_{i=1}^{\infty} b_i = \sum_{j \in \mathbb{N}_0^{\mathbb{N}} : |j| = n+1} \tilde{c}_j \prod_{i=1}^{\infty} b_i^{j_i}.$$
Proof: This can be shown by induction, where the induction step is similar to the proof for the Cauchy product of series.
Lemma 4.2 Assume that $n \in \mathbb{N}_0$ is fixed. Then for all $j \in \mathbb{N}_0^{\mathbb{N}}$ with $|j| = n$, there exists a constant $c_j \in (0, \infty)$ such that for all summable sequences $(b_i) \subseteq [0, \infty)$ we have
$$\Biggl( \sum_{i=1}^{\infty} b_i \Biggr)^{\! n} = \sum_{j \in \mathbb{N}_0^{\mathbb{N}} : |j| = n} c_j \prod_{i=1}^{\infty} b_i^{j_i}.$$
Proof: This can be shown by induction using Lemma 4.1.
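For finitely supported sequences, the constants $c_j$ in Lemma 4.2 can be taken to be the multinomial coefficients (this explicit choice is consistent with the lemma but not stated in it), and the identity can be checked numerically; the following sketch (ours, with an arbitrary test sequence) does exactly that.

```python
import numpy as np
from itertools import product
from math import factorial, prod

def multinomial(js):
    """Multinomial coefficient n! / (j_1! ... j_m!) with n = |j|;
    one valid choice of the constants c_j of Lemma 4.2."""
    return factorial(sum(js)) // prod(factorial(j) for j in js)

def check_lemma_4_2(b, n):
    """Numerically verify (sum b_i)^n = sum_{|j|=n} c_j prod b_i^{j_i}
    for a finitely supported non-negative sequence b."""
    lhs = sum(b) ** n
    rhs = 0.0
    for js in product(range(n + 1), repeat=len(b)):
        if sum(js) == n:
            rhs += multinomial(js) * prod(bi**ji for bi, ji in zip(b, js))
    return np.isclose(lhs, rhs)

print(check_lemma_4_2([0.5, 0.25, 0.125, 0.0625], n=3))  # True
```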
Given a non-empty countable set $J$ and a family $w := (w_j)_{j \in J} \subseteq \mathbb{R}$, we write $\|w\|_2^2 := \sum_{j \in J} w_j^2$, and, as usual, we denote the space of all families for which this quantity is finite by $\ell_2(J)$. Recall that $\ell_2(J)$ together with $\|\cdot\|_2$ is a Hilbert space and we denote its inner product by $\langle \cdot, \cdot \rangle_{\ell_2(J)}$. Moreover, $\ell_2 := \ell_2(\mathbb{N})$ is separable, and by using an orthonormal basis representation, it is further known that every separable Hilbert space is isometrically isomorphic to $\ell_2$. In this sense, $\ell_2$ can be viewed as a generic model for separable Hilbert spaces.
The following result provides a method to construct Taylor kernels on closed balls in $\ell_2$.
Proposition 4.3 Let $r \in (0, \infty]$ and $K : [-r, r] \to \mathbb{R}$ be a function that can be expressed by its Taylor series given in (1), i.e. $K(t) = \sum_{n=0}^{\infty} a_n t^n$, $t \in [-r, r]$. Define $J := \{j \in \mathbb{N}_0^{\mathbb{N}} : |j| < \infty\}$. If $a_n \ge 0$ for all $n \ge 0$, then $k : \sqrt{r}\, B_{\ell_2} \times \sqrt{r}\, B_{\ell_2} \to \mathbb{R}$ defined by (3), i.e.
$$k(w, w') := K\bigl(\langle w, w' \rangle_{\ell_2}\bigr) = \sum_{n=0}^{\infty} a_n \langle w, w' \rangle_{\ell_2}^n, \qquad w, w' \in \sqrt{r}\, B_{\ell_2},$$
is a kernel. Moreover, for all $j \in J$, there exists a constant $c_j \in (0, \infty)$ such that $\Phi : \sqrt{r}\, B_{\ell_2} \to \ell_2(J)$ defined by
$$\Phi(w) := \Bigl( c_j \prod_{i=1}^{\infty} w_i^{j_i} \Bigr)_{j \in J}, \qquad w \in \sqrt{r}\, B_{\ell_2}, \tag{12}$$
is a feature map of $k$, where we use the convention $0^0 := 1$.
Proof: For $w, w' \in \sqrt{r}\, B_{\ell_2}$, the Cauchy-Schwarz inequality yields $|\langle w, w' \rangle| \le \|w\|_2 \|w'\|_2 \le r$ and thus $k$ is well-defined. Let $w_i$ denote the $i$-th component of $w \in \ell_2$. Since (1) is absolutely convergent, Lemma 4.2 then shows that, for all $j \in \mathbb{N}_0^{\mathbb{N}}$, there exists a constant $\tilde{c}_j \in (0, \infty)$ such that
$$k(w, w') = \sum_{j \in \mathbb{N}_0^{\mathbb{N}}} a_{|j|}\, \tilde{c}_j \prod_{i=1}^{\infty} (w_i')^{j_i} \prod_{i=1}^{\infty} w_i^{j_i}.$$
Setting $c_j := \sqrt{a_{|j|}\, \tilde{c}_j}$, we obtain that $\Phi$ defined in (12) is indeed a feature map of $k$, and hence $k$ is a kernel.
Before we can state our first main result we need to recall the following test of universality from [31, Theorem 4.56].
Theorem 4.4 Let $W$ be a compact metric space and $k$ be a continuous kernel on $W$ with $k(w, w) > 0$ for all $w \in W$. Suppose that we have an injective feature map $\Phi : W \to \ell_2(J)$ of $k$, where $J$ is some countable set. We write $\Phi_j : W \to \mathbb{R}$ for its $j$-th component, i.e., $\Phi(w) = (\Phi_j(w))_{j \in J}$, $w \in W$. If $A := \mathrm{span}\{\Phi_j : j \in J\}$ is an algebra, then $k$ is universal.
With the help of Theorem 4.4 and Proposition 4.3 we can now prove our first main result.
Proof of Theorem 2.1: We have already seen in Proposition 4.3 that $k$ is a kernel on $\sqrt{r}\, B_{\ell_2}$. Let us now fix a compact $W \subseteq \sqrt{r}\, B_{\ell_2}$. For every $j \in J$, where $J$ is defined in Proposition 4.3, there are only finitely many components $j_i$ with $j_i \neq 0$. Consequently, there exists a bijection between $J$ and the set of all finite subsets of $\mathbb{N}$. Since the latter is countable, $J$ is countable. Furthermore, we have
$$k(w, w) = \sum_{n=0}^{\infty} a_n \|w\|_{\ell_2}^{2n} \ge a_0 > 0$$
for all $w \in W$, and it is obvious that the components of the feature map $\Phi$ found in Proposition 4.3 span an algebra. Finally, if we have $w, w' \in W$ with $w \neq w'$, there exists an $i \ge 1$ such that $w_i \neq w_i'$. For the multi-index $j \in J$ that equals 1 at the $i$-th component and vanishes everywhere else we then have $\Phi_j(w) = c_j w_i \neq c_j w_i' = \Phi_j(w')$, and hence $\Phi$ is injective.
Proof of Theorem 2.2: Since $H$ is a separable Hilbert space, there exists an isometric isomorphism $I : H \to \ell_2$. We define $V := \rho(X)$, see also the diagram in (7). Since $\rho$ is continuous, $V$ is the image of a compact set under a continuous map, and thus $V$ is compact and the inverse of the bijective map $I \circ \rho : X \to W$ is continuous. Consequently, there is a one-to-one relationship between the continuous functions $f_X$ on $X$ and the continuous functions $f_W$ on $W$, namely $C(X) = C(W) \circ I \circ \rho$, see also the discussion following (7). Moreover, the fact that $I : H \to \ell_2$ is an isometric isomorphism yields $\langle I(\rho(x)), I(\rho(x')) \rangle_{\ell_2} = \langle \rho(x), \rho(x') \rangle_H$ for all $x, x' \in X$, and hence the kernel $k$ considered in Theorem 2.2 is of the form $k_X = k_W(I \circ \rho(\cdot), I \circ \rho(\cdot))$, where $k_W$ is the corresponding kernel defined on $W \subseteq \ell_2$ considered in Theorem 2.1. Now, the discussion following (7) showed $H_X = H_W \circ I \circ \rho$. Consequently, if we fix a function $g \in C(X)$, then $f := g \circ \rho^{-1} \circ I^{-1} \in C(W)$ can be approximated by $H_W$, that is, for all $\varepsilon > 0$, there exists an $h \in H_W$ such that $\|h - f\|_\infty \le \varepsilon$. Since $I \circ \rho : X \to W$ is bijective and $f \circ I \circ \rho = g$, we conclude that $\|h \circ I \circ \rho - g\|_\infty \le \varepsilon$. Now the assertion follows from $h \circ I \circ \rho \in H_X$.
References
[1] H. Bauer. Measure and Integration Theory. De Gruyter, Berlin, 2001.
[2] P. Billingsley. Convergence of Probability Measures. John Wiley & Sons, New York, 2nd edition, 1999.
[3] N. Bourbaki. Integration I. Chapters 1-6. Springer, Berlin, 2004. Translated from the 1959, 1965, and 1967 French originals by S.K. Berberian.
[4] A. Caponnetto, C.A. Micchelli, M. Pontil, and Y. Ying. Universal multi-task kernels. J. Mach. Learn. Res., 9:1615-1646, 2008.
[5] O. Chapelle, P. Haffner, and V. Vapnik. SVMs for histogram-based image classification. IEEE Transactions on Neural Networks, 10:1055-1064, 1999.
[6] R. M. Dudley. Real Analysis and Probability. Cambridge University Press, Cambridge, 2002.
[7] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Mach. Learn. Res., 5:73-99, 2005.
[8] K. Fukumizu, F. R. Bach, and M. I. Jordan. Kernel dimension reduction in regression. Ann. Statist., 37:1871-1905, 2009.
[9] K. Fukumizu, B. K. Sriperumbudur, A. Gretton, and B. Schölkopf. Characteristic kernels on groups and semigroups. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 473-480. 2009.
[10] R. Hable and A. Christmann. Qualitative robustness of support vector machines. arXiv:0912.0874v1, 2009.
[11] M. Hein and O. Bousquet. Kernels, associated structures and generalizations. Technical report, Max-Planck-Institute for Biological Cybernetics, 2004.
[12] M. Hein and O. Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In Z. Ghahramani and R. Cowell, editors, AISTATS, pages 136-143, 2005.
[13] M. Hein, O. Bousquet, and B. Schölkopf. Maximal margin classification for metric spaces. Journal of Computer and System Sciences, 71:333-359, 2005.
[14] M. Hein, T. N. Lal, and O. Bousquet. Hilbertian metrics on probability measures and their application in SVMs. In C. E. Rasmussen, H. H. Bülthoff, M. Giese, and B. Schölkopf, editors, Pattern Recognition, Proceedings of the 26th DAGM Symposium, pages 270-277, Berlin, 2004. Springer.
[15] T. Joachims. Learning to Classify Text Using Support Vector Machines. Kluwer Academic Publishers, Boston, 2002.
[16] J. Lafferty and G. Lebanon. Diffusion kernels on statistical manifolds. J. Mach. Learn. Res., 6:129-163, 2005.
[17] A.F.T. Martins, N.A. Smith, E.P. Xing, P.M.Q. Aguiar, and M.A.T. Figueiredo. Nonextensive information theoretic kernels on measures. J. Mach. Learn. Res., 10:935-975, 2009.
[18] C. A. Micchelli, Y. Xu, and H. Zhang. Universal kernels. J. Mach. Learn. Res., 7:2651-2667, 2006.
[19] K. R. Parthasarathy. Probability Measures on Metric Spaces. Academic Press, New York, 1967.
[20] E. Parzen. On estimation of a probability density function and mode. Ann. Math. Statist., 33:1065-1076, 1962.
[21] A. Pinkus. Strictly positive definite functions on a real inner product space. Adv. Comput. Math., 20:263-271, 2004.
[22] B. Schölkopf, A. J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput., 10:1299-1319, 1998.
[23] B. Schölkopf, K. Tsuda, and J. P. Vert. Kernel Methods in Computational Biology. MIT Press, Cambridge, MA, 2004.
[24] D. Scott. Averaged shifted histograms: Effective nonparametric density estimation in several dimensions. Ann. Statist., 13:1024-1040, 1985.
[25] C. Scovel, D. Hush, I. Steinwart, and J. Theiler. Radial kernels and their reproducing kernel Hilbert spaces. Journal of Complexity, 2010, to appear.
[26] A.J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In E. Takimoto, editor, Algorithmic Learning Theory, Lecture Notes in Computer Science. Springer, 2007. Proceedings of the 10th International Conference on Discovery Science, pages 40-41.
[27] B. Sriperumbudur, K. Fukumizu, A. Gretton, G. Lanckriet, and B. Schölkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1750-1758. 2009.
[28] B. Sriperumbudur, K. Fukumizu, and G. Lanckriet. On the relation between universality, characteristic kernels and RKHS embeddings of measures. In Yee Whye Teh and M. Titterington, editors, AISTATS 2010, Proc. of the 13th International Conference on Artificial Intelligence and Statistics, volume 9, pages 773-780. 2010.
[29] B. Sriperumbudur, K. Fukumizu, and G. Lanckriet. Universality, characteristic kernels and RKHS embeddings of measures. arXiv:1003.0887v1, 2010.
[30] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. J. Mach. Learn. Res., 2:67-93, 2001.
[31] I. Steinwart and A. Christmann. Support Vector Machines. Springer, New York, 2008.
[32] I. Steinwart, D. Hush, and C. Scovel. Function classes that approximate the Bayes risk. In COLT'06, 19th Conference on Learning Theory, Pittsburgh, 2006.
3,499 | 4,169 | Phone Recognition with the Mean-Covariance
Restricted Boltzmann Machine
George E. Dahl, Marc?Aurelio Ranzato, Abdel-rahman Mohamed, and Geoffrey Hinton
Department of Computer Science
University of Toronto
{gdahl, ranzato, asamir, hinton}@cs.toronto.edu
Abstract
Straightforward application of Deep Belief Nets (DBNs) to acoustic modeling
produces a rich distributed representation of speech data that is useful for recognition and yields impressive results on the speaker-independent TIMIT phone recognition task. However, the first-layer Gaussian-Bernoulli Restricted Boltzmann
Machine (GRBM) has an important limitation, shared with mixtures of diagonalcovariance Gaussians: GRBMs treat different components of the acoustic input
vector as conditionally independent given the hidden state. The mean-covariance
restricted Boltzmann machine (mcRBM), first introduced for modeling natural images, is a much more representationally efficient and powerful way of modeling
the covariance structure of speech data. Every configuration of the precision units
of the mcRBM specifies a different precision matrix for the conditional distribution over the acoustic space. In this work, we use the mcRBM to learn features
of speech data that serve as input into a standard DBN. The mcRBM features
combined with DBNs allow us to achieve a phone error rate of 20.5%, which is
superior to all published results on speaker-independent TIMIT to date.
1 Introduction
Acoustic modeling is a fundamental problem in automatic continuous speech recognition. Most
state of the art speech recognition systems perform acoustic modeling using the following approach
[1]. The acoustic signal is represented as a sequence of feature vectors; these feature vectors typically hold a log spectral estimate on a perceptually warped frequency scale and are augmented
with the first and second (at least) temporal derivatives of this spectral information, computed using
smoothed differences of neighboring frames. Hidden Markov models (HMMs), with Gaussian mixture models (GMMs) for the emission distributions, are used to model the probability of the acoustic
vector sequence given the (tri)phone sequence in the utterance to be recognized.1 Typically, all of the
individual Gaussians in the mixtures are restricted to have diagonal covariance matrices and a large
hidden Markov model is constructed from sub-HMMs for each triphone to help deal with the effects of context-dependent variations. However, to mitigate the obvious data-sparsity and efficiency
problems context dependence creates, modern systems perform sophisticated parameter tying by
clustering the HMM states using carefully constructed decision trees to make state tying choices.
Although systems of this sort have yielded many useful results, diagonal covariance CDHMM
models have several potential weaknesses as models of speech data. On the face of things at
least, feature vectors for overlapping frames are treated as independent and feature vectors must
be augmented with derivative information in order to enable successful modeling with mixtures of
diagonal-covariance Gaussians (see [2, 3] for a more in-depth discussion of the exact consequences
of the delta features). However, perhaps even more disturbing than the frame-independence assumption are the compromises required to deal with two competing pressures in Gaussian mixture model
1 We will refer to HMMs with GMM emission distributions as CDHMMs for continuous-density HMMs.
training: the need for expressive models capable of representing the variability present in real speech
data and the need to combat the resulting data sparsity and statistical efficiency issues. These pressures of course exist for other models as well, but the tendency of GMMs to partition the input space
into regions where only one component of the mixture dominates is a weakness that inhibits efficient
use of a very large number of tunable parameters. The common decision to use diagonal covariance
Gaussians for the mixture components is an example of such a compromise of expressiveness that
suggests that it might be worthwhile to explore models in which each parameter is constrained by a
large fraction of the training data. By contrast, models that use the simultaneous activation of a large
number of hidden features to generate an observed input can use many more of their parameters to
model each training example and hence have many more training examples to constrain each parameter. As a result, models that use non-linear distributed representations are harder to fit to data, but
they have much more representational power for the same number of parameters.
The diagonal covariance approximation typically employed for GMM-based acoustic models is
symptomatic of, but distinct from, the general representational inefficiencies that tend to crop up
in mixture models with massive numbers of highly specialized, distinctly parameterized mixture
components. Restricting mixture components to have diagonal covariance matrices introduces a
conditional independence assumption between dimensions within a single frame. The delta-feature
augmentation mitigates the severity of the approximation and thus makes outperforming diagonal
covariance Gaussian mixture models difficult. However, a variety of precision matrix modeling
techniques have emerged in the speech recognition literature. For example, [4] describes a basis
superposition framework that includes many of these techniques.
Although the recent work in [5] on using deep belief nets (DBNs) for phone recognition begins to
attack the representational efficiency issues of GMMs, Gaussian-Bernoulli Restricted Boltzmann
Machines (GRBMs) are used to deal with the real-valued input representation (in this case, mel-frequency cepstral coefficients). GRBMs model different dimensions of their input as conditionally
independent given the hidden unit activations, a weakness akin to restricting Gaussians in a GMM
to have diagonal covariance. This conditional independence assumption is inappropriate for speech
data encoded as a sequence of overlapping frames of spectral information, especially when many
frames are concatenated to form the input vector. Such data can exhibit local smoothness in both
frequency and time punctuated by bursts of energy that violate these local smoothness properties.
Performing a standard augmentation of the input with temporal derivative information, as [5] did,
will of course make it easier for GRBMs to deal with such data, but ideally one would use a model
capable of succinctly modeling these effects on its own.
Inspired by recent successes in modeling natural images, the primary contribution of this work is to
bring the mean-covariance restricted Boltzmann machine (mcRBM) of [6] to bear on the problem
of extracting useful features for phone recognition and to incorporate these features into a deep
architecture similar to one described in [5]. We demonstrate the efficacy of our approach by reporting
results on the speaker-independent TIMIT phone recognition task. TIMIT, as argued in [7], is an
ideal dataset for testing new ideas in speech recognition before trying to scale them up to large
vocabulary tasks because it is phonetically rich, has well-labeled transcriptions, and is small enough
not to pose substantial computational challenges at test time. Our best system achieves a phone
error rate on the TIMIT corpus of 20.5%, which is superior to all published results on speakerindependent TIMIT to date. We obtain these results without augmenting the input with temporal
difference features since a sensible model of speech data should be able to learn to extract its own
useful features that make explicit inclusion of difference features unnecessary.
2 Using Deep Belief Nets for Phone Recognition
Following the approach of [5], we use deep belief networks (DBNs), trained via the unsupervised
pretraining algorithm described in [8], combined with supervised fine-tuning using backpropagation,
to model the posterior distribution over HMM states given a local window of the acoustic input. We
construct training cases for the DBN by taking n adjacent frames of acoustic input and pairing
them with the identity of the HMM state for the central frame. We obtain the labels from a forced
alignment with a CDHMM baseline. During the supervised phase of learning, we optimize the crossentropy loss for the individual HMM-state predictions, as a more convenient proxy for the number
of mistakes (insertions, deletions, substitutions) in the phone sequence our system produces, which
is what we are actually interested in. In order to compare with the results [5], at test time, we use
the posterior probability distribution over HMM states that the DBN produces in place of GMM
likelihoods in an otherwise standard Viterbi decoder. Since the HMM defines a prior over states, it
is better to divide the posterior probabilities of the DBN by the frequencies of the 183 labels in the
training data [9], but in our experiments this did not noticeably change the results.
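The division by label frequencies amounts to converting state posteriors into scaled likelihoods via Bayes' rule, $p(v \mid s) \propto p(s \mid v) / p(s)$, so that the decoder's own HMM prior is not counted twice. A minimal sketch of this step (ours; array names are illustrative):

```python
import numpy as np

def scaled_log_likelihoods(log_posteriors, state_counts):
    """Convert DBN state posteriors p(s|v) into scaled likelihoods
    p(v|s) ~ p(s|v) / p(s) for use in place of GMM likelihoods in a
    Viterbi decoder; the priors p(s) are estimated from the forced
    alignment's label counts (183 HMM states here)."""
    log_priors = np.log(state_counts / state_counts.sum())
    return log_posteriors - log_priors   # log p(v|s) up to a constant

# log_posteriors: (n_frames, 183) array of DBN outputs (log-softmax).
```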
3 The Mean-Covariance Restricted Boltzmann Machine
The previous work of [5] used a GRBM for the initial DBN layer. The GRBM associates each
configuration of the visible units, v, and hidden units, h, with a probability density according to
$$P(v, h) \propto e^{-E(v,h)}, \tag{1}$$
where $E(v, h)$ is given by
$$E(v, h) = \tfrac{1}{2}(v - b)^T(v - b) - c^T h - v^T W h, \tag{2}$$
and where $W$ is the matrix of visible/hidden connection weights, $b$ is a visible unit bias, and $c$ is a hidden unit bias. Equation 2 implicitly assumes that the visible units have a diagonal covariance Gaussian noise model with a variance of 1 on each dimension.
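For concreteness, a small sketch of the GRBM energy (2) and the hidden-unit posteriors it induces (our illustration; shapes and names are assumptions):

```python
import numpy as np

def grbm_energy(v, h, W, b, c):
    """GRBM energy (2): unit-variance Gaussian visibles, binary hiddens.
    v: (n_vis,), h: (n_hid,), W: (n_vis, n_hid)."""
    return 0.5 * np.dot(v - b, v - b) - np.dot(c, h) - v @ W @ h

def grbm_hidden_probs(v, W, c):
    """P(h_k = 1 | v) = sigmoid(c + W^T v), which follows because the
    energy (2) is linear in each h_k."""
    return 1.0 / (1.0 + np.exp(-(c + W.T @ v)))
```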
Another option for learning to extract binary features from real-valued data that has enjoyed success
in vision applications is the mean-covariance RBM (mcRBM), first introduced in [10] and [6]. The
mcRBM has two groups of hidden units: mean units and precision units. Without the precision
units, the mcRBM would be identical to a GRBM. With only the precision units, we have what
we will call the 'cRBM', following the terminology in [6]. The precision units are designed to
enforce smoothness constraints in the data, but when one of these constraints is seriously violated,
it is removed by turning off the precision unit. The set of active precision units therefore specifies
a sample-specific covariance matrix. In order for a visible vector to be assigned high probability
by the precision units, it must only fail to satisfy a small number of the precision unit constraints,
although each of these constraints could be egregiously violated.
The cRBM can be viewed as a particular type of factored third order Boltzmann machine. In other
words, the RBM energy function is modified to have multiplicative interactions between triples of
two visible units, $v_i$ and $v_j$, and one hidden unit $h_k$. Unrestricted 3-way connectivity causes a cubic
growth in the number of parameters that is unacceptable if we wish to scale this sort of model to
high dimensional data. Factoring the weights into a sum of 3-way outer products can reduce the
growth rate of the number of parameters in the model to one that is comparable to a normal RBM.
After factoring, we may write the cRBM energy function2 (with visible biases omitted) as:
$$E(v, h) = -d^T h - (v^T R)^2 P h, \tag{3}$$
where R is the visible-factor weight matrix, d denotes the hidden unit bias vector, and P is the
factor-hidden, or 'pooling' matrix. The squaring in equation 3 (and in other equations with this
term) is performed elementwise. We force P to only have non-positive entries. We must constrain
P in this way to avoid a model that assigns larger and larger probabilities (more negative energies)
to larger and larger inputs.
The hidden units of the cRBM are still (just as in GRBMs) conditionally independent given the states
of the visible units, so inference remains simple. However, the visible units are coupled in a Markov
Random Field determined by the settings of the hidden units. The interaction weight between two
arbitrary visible units $v_i$ and $v_j$, which we shall denote $\tilde{w}_{i,j}$, depends on the states of all the hidden units according to:
$$\tilde{w}_{i,j} = \sum_{k} \sum_{f} h_k\, r_{if}\, r_{jf}\, p_{fk}.$$
The conditional distribution of the hidden units (derived from 3) given the visible unit states $v$ is:
$$P(h \mid v) = \sigma\Bigl( d + \bigl( (v^T R)^2 P \bigr)^T \Bigr),$$
where $\sigma$ denotes the elementwise logistic sigmoid, $\sigma(x) = (1 + e^{-x})^{-1}$. The conditional distribution of the visible units given the hidden unit states for the cRBM is given by:
$$P(v \mid h) \propto \mathcal{N}\Bigl( 0,\ \bigl( R\, \mathrm{diag}(-P h)\, R^T \bigr)^{-1} \Bigr). \tag{4}$$
2 In order to normalize the distribution implied by this energy function, we must restrict the visible units to a region of the input space that has finite extent. However, once we add the mean RBM this normalization issue vanishes.
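The two cRBM conditionals above translate directly into code. The following sketch is ours and assumes $R$ is (n_visible x n_factors) and $P$ is (n_factors x n_hidden) with non-positive entries, consistent with equation (3):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def crbm_hidden_probs(v, R, P, d):
    """Precision-unit posteriors: P(h|v) = sigmoid(d + P^T (R^T v)^2),
    i.e. each squared factor response (r_f^T v)^2 is pooled through
    the non-positive matrix P before the sigmoid."""
    factor_sq = (R.T @ v) ** 2          # (v^T R)^2, one value per factor
    return sigmoid(d + P.T @ factor_sq)

def crbm_precision(h, R, P):
    """Precision matrix of P(v|h) in (4): R diag(-Ph) R^T, positive
    semi-definite because P has non-positive entries."""
    return R @ np.diag(-(P @ h)) @ R.T
```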
The cRBM always assigns highest probability to the all zero visible vector. In order to allow the
model to shift the mean, we add an additional set of binary hidden units whose vector of states we
shall denote m. The product of the distributions defined by the cRBM and the GRBM forms the
mcRBM. If $E_C(v, h)$ denotes the cRBM energy function (equation 3) and $E_M(v, m)$ denotes the GRBM energy function (equation 2), then the mcRBM energy function is:
$$E_{MC}(v, h, m) = E_C(v, h) + E_M(v, m). \tag{5}$$
The gradient of the $E_M$ term moves the minimum of $E_{MC}$ away from the zero vector, but how far it moves depends on the curvature of the precision matrix defined by $E_C$. The resulting conditional distribution over the visible units, given the two sets of hidden units, is:
$$P(v \mid h, m) \propto \mathcal{N}(\Sigma W m,\ \Sigma), \qquad \text{where} \qquad \Sigma = \bigl(R\, \mathrm{diag}(-P h)\, R^T\bigr)^{-1}.$$
Thus the mcRBM can produce conditional distributions over the visible units, given the hidden units,
that have non-zero means, unlike the cRBM.
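The sketch below shows the direct (and deliberately naive) way to realize this conditional, again reusing the variables defined above; W and m are a hypothetical GRBM weight matrix and mean-unit state vector, and the explicit O(N³) inversion is exactly the cost that motivates the sampling strategy described next:

```python
M = 512                                  # hypothetical number of mean units
W = rng.normal(size=(N, M)) * 0.01       # assumed GRBM visible-hidden weights
m = rng.integers(0, 2, size=M).astype(float)

def sample_visible(h, m):
    """Draw v ~ N(Sigma W m, Sigma), Sigma = (R diag(-Ph) R^T)^{-1}."""
    Lambda = R @ np.diag(-(P @ h)) @ R.T
    Sigma = np.linalg.inv(Lambda)        # expensive N x N inversion
    mean = Sigma @ (W @ m)
    L = np.linalg.cholesky(Sigma)
    return mean + L @ rng.normal(size=N)

v_sample = sample_visible(h, m)
```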
Just like other RBMs, the mcRBM can be trained using the following update rule, for some generic model parameter θ:

Δθ ∝ -⟨∂E/∂θ⟩_data + ⟨∂E/∂θ⟩_reconstruction.
However, since the matrix inversion required to sample from P (v|h, m) can be expensive, we integrate out the hidden units and use Hybrid Monte Carlo (HMC) [11] on the mcRBM free energy to
obtain the reconstructions.
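Since both h and m are binary, they can be summed out analytically, giving a free energy whose softplus terms correspond to the two sets of hidden units. The sketch below assumes the standard GRBM energy of equation 2 (which appears earlier in the paper) with hypothetical visible biases c and mean-unit biases b, and omits the length-normalization of v described in Section 3.1:

```python
b = np.zeros(M)   # hypothetical mean-unit biases
c = np.zeros(N)   # hypothetical visible biases

def free_energy(v):
    """F(v) = -log sum_{h,m} exp(-E_MC(v, h, m)), up to an additive constant.
    HMC simulates Hamiltonian dynamics on F(v) to produce reconstructions,
    avoiding the matrix inversion needed to sample P(v|h, m) directly."""
    softplus = lambda x: np.logaddexp(0.0, x)       # log(1 + e^x), stable
    prec_in = d + ((v @ R) ** 2) @ P                # inputs to precision units
    mean_in = b + W.T @ v                           # inputs to mean units
    return 0.5 * v @ v - c @ v \
           - softplus(prec_in).sum() - softplus(mean_in).sum()
```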
It is important to emphasize that the mcRBM model of covariance structure is much more powerful
than merely learning a covariance matrix in a GRBM. Learning the covariance matrix for a GRBM
is equivalent to learning a single global linear transformation of the data, whereas the precision
units of an mcRBM are capable of specifying exponentially many different covariance matrices and
explaining different visible vectors with different distributions over these matrices.
3.1 Practical details
In order to facilitate stable training, we make the precision unit term in the energy function insensitive to the scale of the input data by normalizing by the length of v. This makes the conditional
P(v|h) clearly non-Gaussian. We constrain the columns of P to have unit L1 norm and to be sparse.
We enforce one-dimensional locality and sparsity in P by setting entries beyond a distance of one
from the main diagonal to zero after every update. Additionally, we constrain the columns of R to
all have equal L2 norms and learn a single global scaling factor shared across all the factors. The
non-positivity constraint on the entries of P is maintained by zeroing out, after each update, any
entries that become positive.
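These constraints can be re-imposed by a simple projection after each gradient step. The sketch below shows one plausible implementation; the exact order of operations and the way we equalize the column norms of R are our assumptions:

```python
def project_constraints(P, R, eps=1e-8):
    """Re-impose the Section 3.1 constraints on P and R after an update."""
    Fdim, Kdim = P.shape
    # Non-positivity: zero out any entries of P that became positive.
    P = np.minimum(P, 0.0)
    # One-dimensional locality: zero entries more than 1 away from the diagonal.
    idx_f = np.arange(Fdim)[:, None]
    idx_k = np.arange(Kdim)[None, :]
    P = np.where(np.abs(idx_f - idx_k) <= 1, P, 0.0)
    # Unit L1 norm for each column of P.
    P = P / np.maximum(np.abs(P).sum(axis=0), eps)
    # Equal L2 norms for the columns of R (a shared global scale is learned
    # separately, so here we simply rescale each column to the mean norm).
    col_norms = np.linalg.norm(R, axis=0)
    R = R * (col_norms.mean() / np.maximum(col_norms, eps))
    return P, R
```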
4 Deep Belief Nets
Learning is difficult in densely connected, directed belief nets that have many hidden layers because
it is difficult to infer the posterior distribution over the hidden variables, when given a data vector,
due to the phenomenon of explaining away. Markov chain Monte Carlo methods [12] can be used
to sample from the posterior, but they are typically very time-consuming. In [8] complementary
priors were used to eliminate the explaining away effects, producing a training procedure which is
equivalent to training a stack of restricted Boltzmann machines.
The stacking procedure works as follows. Once an RBM has been trained on data, we can infer
the hidden unit activation probabilities given a data vector and re-represent the data vector as the
vector of corresponding hidden activations. Since the RBM has been trained to reconstruct the data
well, the hidden unit activations will retain much of the information present in the data and pick
up (possibly higher-order) correlations between different data dimensions that exist in the training
set. Once we have used one RBM as a feature extractor we can, if desired, train an additional RBM
that treats the hidden activations of the first RBM as data to model. After training a sequence of
RBMs, we can compose them to form a generative model whose top two layers are the final RBM
in the stack and whose lower layers all have downward-directed connections that implement the
p(h^{k-1}|h^k) learned by the kth RBM, where h^0 = v.

Figure 1: An mcRBM (visible units v connected to precision units h through R and P, and to mean units m through W) with two RBMs stacked on top (weights W2 and W3, hidden layers h2 and h3).
The weights obtained by the greedy layer-by-layer training procedure described for stacking RBMs,
above, can be used to initialize the weights of a deep feed-forward neural network. Once we add
an output layer to the pre-trained neural network, we can discriminatively fine-tune the weights of
this neural net using any variant of backpropagation [13] we wish. Although options for fine-tuning
exist other than backpropagation, such as the up-down algorithm used in [8], we restrict ourselves
to backpropagation (updating the weights every 128 training cases) in this work for simplicity and
because it is sufficient for obtaining excellent results.
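As a schematic of this greedy procedure, the sketch below trains a stack of binary RBMs with CD-1 and re-represents the data at each level, reusing the sigmoid and rng defined in the earlier snippets; it is a bare-bones illustration (biases and momentum omitted), not the authors' training code:

```python
def train_rbm(data, num_hidden, epochs=50, lr=0.01):
    """Minimal CD-1 training of a binary RBM (biases omitted for brevity)."""
    W = 0.01 * rng.normal(size=(data.shape[1], num_hidden))
    for _ in range(epochs):
        h_prob = sigmoid(data @ W)                      # positive phase
        h_samp = (rng.random(h_prob.shape) < h_prob) * 1.0
        v_recon = sigmoid(h_samp @ W.T)                 # reconstruction
        h_recon = sigmoid(v_recon @ W)                  # negative phase
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
    return W

def pretrain_stack(data, layer_sizes):
    """Greedy layer-wise pretraining: each RBM models the hidden unit
    activations of the layer below it (h^0 = the data). The returned
    weights can initialize a deep feed-forward net for fine-tuning."""
    weights, inputs = [], data
    for size in layer_sizes:
        W = train_rbm(inputs, size)
        weights.append(W)
        inputs = sigmoid(inputs @ W)    # re-represent data for the next RBM
    return weights
```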
Figure 1 is a diagram of two RBMs stacked on top of an mcRBM. Note that the RBM immediately
above the mcRBM uses both the mean unit activities and the precision unit activities together as
visible data. Later, during backpropagation, after we have added the softmax output unit, we do
not backpropagate through the mcRBM weights, so the mcRBM is a purely unsupervised feature
extractor.
5 Experimental Setup

5.1 The TIMIT Dataset
We used the TIMIT corpus³ for all of our phone recognition experiments. We used the 462-speaker
training set and removed all SA records (i.e., identical sentences for all speakers in the database),
since they could potentially bias our results. A development set of 50 speakers was used for hand-tuning hyperparameters and automated decoder tuning. As is standard practice, results are reported
using the 24-speaker core test set. We produced the training labels with a forced alignment of an
HMM baseline. Since there are three HMM states per phone and 61 phones, all DBN architectures
had a 183-way softmax output unit. Once the training labels have been created, the HMM baseline
³ http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC93S1.
is no longer needed; we do not combine or average our results with any HMM+GMM system. After
decoding, starting and ending silences were removed and the 61 phone classes were mapped to a set
of 39 classes as in [14] for scoring. We removed starting and ending silences before scoring in order
to be as similar to [5] as possible. However, to produce a more informative comparison between our
results and results in the literature that do not remove starting and ending silences, we also present
the phone error rate of our best model using the more common scoring strategy. During decoding,
we used a simple bigram language model over phones. Our results would certainly improve with
a trigram language model. In order to be able to make useful comparisons between different DBN
architectures (and achieve the best results), we optimized the Viterbi decoder parameters (the word
insertion probability and the language model scale factor) on the development set and then used the
best performing setting to compute the phone error rate (PER) for the core test set.
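Concretely, this optimization is just a small search over two scalars on the development set; the sketch below illustrates the idea with a grid search, where decode_per is a hypothetical function that runs the Viterbi decoder with the given parameters and returns the development-set PER, and the grid values are invented for the example:

```python
import itertools

word_insertion_probs = [0.1, 0.5, 1.0, 2.0]   # hypothetical search grid
lm_scale_factors = [1.0, 2.0, 4.0, 8.0]       # hypothetical search grid

best_params = min(
    itertools.product(word_insertion_probs, lm_scale_factors),
    key=lambda p: decode_per(dev_set, word_ins_prob=p[0], lm_scale=p[1]),
)
print("selected (word insertion prob, LM scale):", best_params)
```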
5.2 Preprocessing
Since we have completely abandoned Gaussian mixture model emission distributions, we are no
longer forced to use temporal derivative features. For all experiments the acoustic signal was analyzed using a 25-ms Hamming window with 10-ms between the left edges of successive frames. We
use the output from a mel scale filterbank, extracting 39 filterbank output log magnitudes and one
log energy per frame. Once groups of 15 frames have been concatenated, we perform PCA whitening and preserve the 384 most important principal components. Since we perform PCA whitening
anyway, the discrete cosine transform used to compute mel frequency cepstral coefficients (MFCCs)
from the filterbank output is not useful. Determining the number of frames of acoustic context to
give to the DBN is an important preprocessing decision; preliminary experiments revealed that moving to 15 frames of acoustic data, from the 11 used in [5], could provide improvements in PER when
training a DBN on features from a mcRBM. It is possible that even larger acoustic contexts might
be beneficial as well. Also, since the mcRBM is trained as a generative model, doubling the input
dimensionality by using a 5-ms advance per frame is unlikely to cause serious overfitting and might
well improve performance.
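The sketch below summarizes this pipeline for a single utterance; in practice the PCA basis would be estimated once on the full training set rather than per utterance, and the per-frame feature extraction (39 mel filterbank log magnitudes plus one log energy) is assumed to happen upstream:

```python
def preprocess(frame_features, context=15, out_dim=384):
    """frame_features: (num_frames, 40) array of filterbank features from
    25-ms windows with a 10-ms advance. Returns whitened context windows."""
    T, D = frame_features.shape                    # D = 40 here
    # Concatenate groups of `context` consecutive frames: rows of size 600.
    windows = np.stack([frame_features[t:t + context].ravel()
                        for t in range(T - context + 1)])
    # PCA whitening, keeping the `out_dim` leading principal components.
    centered = windows - windows.mean(axis=0)
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:out_dim]
    whitener = eigvecs[:, top] / np.sqrt(eigvals[top] + 1e-8)
    return centered @ whitener
```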
5.3 Computational Setup
Training DBNs of the sizes used in this paper can be computationally expensive. We accelerated
training by exploiting graphics processors, in particular GPUs in a NVIDIA Tesla S1070 system,
using the wonderful library described in [15]. The wall time per epoch varied with the architecture.
An epoch of training of an mcRBM that had 1536 hidden units (1024 precision units and 512 mean
units) took 20 minutes. When each DBN layer had 2048 hidden units, each epoch of pre-training for
the first DBN layer took about three minutes and each epoch of pretraining for the fifth layer took
seven to eight minutes, since we propagated through each earlier layer. Each epoch of fine-tuning
for such a five-DBN-layer architecture took 12 minutes. We used 100 epochs to train the mcRBM,
50 epochs to train each RBM in the stack and 14 epochs of discriminative fine-tuning of the whole
network, for a total of nearly 60 hours, about 34 hours of which were spent training the mcRBM.
6 Experiments
Since one goal of this work is to improve performance on TIMIT by using deep learning architectures, we explored varying the number of DBN layers in our architecture. In agreement with [5], we
found that in order to obtain the best results with DBNs on TIMIT, multiple layers were essential.
Figure 2 plots phone error rate on both the development set and the core test set against the number
of hidden layers in a mcRBM-DBN (we don't count the mcRBM as a hidden layer since we do not
backpropagate through it). The particular mcRBM-DBN shown had 1536 hidden units in each DBN
hidden layer, 1024 precision units in the mcRBM, and 512 mean units in the mcRBM. As the number
of DBN hidden layers increased, error on the development and test sets decreased and eventually
leveled off. The improvements that deeper models can provide over shallower models were evident
from results reported in [5]; the results for the mcRBM-DBN in this work are even more dramatic. In
fact, an mcRBM-DBN with 8 hidden layers exhibits the best development set error, 20.17%,
in these experiments. The same model gets 21.7% on the core test set (20.5% if starting and ending
silences are included in scoring). Furthermore, at least 5 DBN hidden layers seem to be necessary
to break a test set PER of 22%.

Figure 2: Effect of increasing model depth. Phone error rate (PER) on the development and core test sets versus the number of DBN hidden layers (1 to 9).

Table 1: The effect of DBN layer size on Phone Error Rate for 5-layer mcRBM-DBN models

Model        devset   testset
512 units    21.4%    22.8%
1024 units   20.9%    22.3%
1536 units   20.4%    21.9%
2048 units   20.4%    21.8%

Models of this depth (note also that an mcRBM-DBN with 8 DBN
hidden layers is really a 9 layer model) have rarely been employed in the deep learning literature (cf.
[8, 16], for example).
Table 1 demonstrates that once the hidden layers are sufficiently large, continuing to increase the
size of the hidden layers did not seem to provide additional improvements. In general, we did not
find our results to be very sensitive to the exact number of hidden units in each layer, as long as the
hidden layers were relatively large.
To isolate the advantage of using an mcRBM instead of a GRBM, we need a clear comparison that is
not confounded by the differences in preprocessing between our work and [5]. Table 2 provides such
a comparison and confirms that the mcRBM feature extraction causes a noticeable improvement in
PER. The architectures in table 2 use 1536-hidden-unit DBN layers.
Table 3 compares previously published results on the speaker-independent TIMIT phone recognition
task to the best mcRBM-DBN architecture we investigated. Results marked with a * remove starting
and ending silences at test time before scoring.

Table 2: mcRBM-DBN vs GRBM-DBN Phone Error Rate

Model                 devset PER   testset PER
5 layer GRBM-DBN      22.3%        23.7%
mcRBM + 4 layer DBN   20.6%        22.3%

Table 3: Reported (speaker independent) results on TIMIT core test set

Method                                                             PER
Stochastic Segmental Models [17]                                   36%
Conditional Random Field [18]                                      34.8%
Large-Margin GMM [19]                                              33%
CD-HMM [20]                                                        27.3%
Augmented conditional Random Fields [20]                           26.6%
Recurrent Neural Nets [21]                                         26.1%
Bayesian Triphone HMM [22]                                         25.6%
Monophone HTMs [23]                                                24.8%
Heterogeneous Classifiers [24]                                     24.4%
Deep Belief Networks (DBNs) [5]                                    23.0%*
Triphone HMMs discriminatively trained w/ BMMI [7]                 22.7%
Deep Belief Networks with mcRBM feature extraction (this work)     21.7%*
Deep Belief Networks with mcRBM feature extraction (this work)     20.5%

One should note that the work of [7] used triphone
HMMs and a trigram language model whereas in this work we used only a bigram language model
and monophone HMMs, so table 3 probably underestimates the error reduction our system provides
over the best published GMM-based approach.
7 Conclusions and Future Work
We have presented a new deep architecture for phone recognition that combines a mcRBM feature
extraction module with a standard DBN. Our approach attacks both the representational inefficiency
issues of GMMs and an important limitation of previous work applying DBNs to phone recognition.
The incorporation of features extracted by a mcRBM into an approach similar to that of [5] produces
results on speaker-independent TIMIT superior to those that have been reported to date. However,
DBN-based acoustic modeling approaches are still in their infancy and many important research
questions remain. During the fine-tuning, one could imagine backpropagating through the decoder
itself and optimizing an objective function more closely related to the phone error rate. Since the
pretraining procedure can make use of large quantities of completely unlabeled data, leveraging
untranscribed speech data on a large scale might allow our approach to be even more robust to
inter-speaker acoustic variations and would certainly be an interesting avenue of future work.
References
[1] S. Young, "Statistical modeling in continuous speech recognition (CSR)," in UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, San Francisco, CA, USA, 2001, pp. 562-571, Morgan Kaufmann Publishers Inc.
[2] C. K. I. Williams, "How to pretend that correlated variables are independent by using difference observations," Neural Comput., vol. 17, no. 1, pp. 1-6, 2005.
[3] J. S. Bridle, "Towards better understanding of the model implied by the use of dynamic features in HMMs," in Proceedings of the International Conference on Spoken Language Processing, 2004, pp. 725-728.
[4] K. C. Sim and M. J. F. Gales, "Minimum phone error training of precision matrix models," IEEE Transactions on Audio, Speech & Language Processing, vol. 14, no. 3, pp. 882-889, 2006.
[5] A. Mohamed, G. E. Dahl, and G. E. Hinton, "Deep belief networks for phone recognition," in NIPS Workshop on Deep Learning for Speech Recognition and Related Applications, 2009.
[6] M. Ranzato and G. Hinton, "Modeling pixel means and covariances using factorized third-order Boltzmann machines," in Proc. of Computer Vision and Pattern Recognition Conference (CVPR 2010), 2010.
[7] T. N. Sainath, B. Ramabhadran, and M. Picheny, "An exploration of large vocabulary tools for small vocabulary phonetic recognition," in IEEE Automatic Speech Recognition and Understanding Workshop, 2009.
[8] G. E. Hinton, S. Osindero, and Y. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, pp. 1527-1554, 2006.
[9] N. Morgan and H. Bourlard, "Continuous speech recognition," Signal Processing Magazine, IEEE, vol. 12, no. 3, pp. 24-42, May 1995.
[10] M. Ranzato, A. Krizhevsky, and G. Hinton, "Factored 3-way restricted Boltzmann machines for modeling natural images," in Proceedings of the International Conference on Artificial Intelligence and Statistics, 2010, vol. 13.
[11] R. M. Neal, Bayesian Learning for Neural Networks, Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1996.
[12] R. M. Neal, "Connectionist learning of belief networks," Artificial Intelligence, vol. 56, no. 1, pp. 71-113, 1992.
[13] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533-536, 1986.
[14] K. F. Lee and H. W. Hon, "Speaker-independent phone recognition using hidden Markov models," IEEE Transactions on Audio, Speech & Language Processing, vol. 37, no. 11, pp. 1641-1648, 1989.
[15] V. Mnih, "Cudamat: a CUDA-based matrix class for Python," Tech. Rep. UTML TR 2009-004, Department of Computer Science, University of Toronto, November 2009.
[16] V. Nair and G. E. Hinton, "3-D object recognition with deep belief nets," in Advances in Neural Information Processing Systems 22, Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, Eds., 2009, pp. 1339-1347.
[17] V. V. Digalakis, M. Ostendorf, and J. R. Rohlicek, "Fast algorithms for phone classification and recognition using segment-based models," IEEE Transactions on Signal Processing, vol. 40, pp. 2885-2896, 1992.
[18] J. Morris and E. Fosler-Lussier, "Combining phonetic attributes using conditional random fields," in Proc. Interspeech, 2006, pp. 597-600.
[19] F. Sha and L. Saul, "Large margin Gaussian mixture modeling for phonetic classification and recognition," in Proc. ICASSP, 2006, pp. 265-268.
[20] Y. Hifny and S. Renals, "Speech recognition using augmented conditional random fields," IEEE Transactions on Audio, Speech & Language Processing, vol. 17, no. 2, pp. 354-365, 2009.
[21] A. Robinson, "An application of recurrent nets to phone probability estimation," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 298-305, 1994.
[22] J. Ming and F. J. Smith, "Improved phone recognition using Bayesian triphone models," in Proc. ICASSP, 1998, pp. 409-412.
[23] L. Deng and D. Yu, "Use of differential cepstra as acoustic features in hidden trajectory modelling for phonetic recognition," in Proc. ICASSP, 2007, pp. 445-448.
[24] A. Halberstadt and J. Glass, "Heterogeneous measurements and multiple classifiers for speech recognition," in Proc. ICSLP, 1998.